Twitter, Timelines, and the Open/Closed Principle

Consider this Tweet for a moment. I’ll be coming back to it at the end.

In my last post, I brought up Twitter’s rumored changes to the timeline feature as a poor example of customer awareness in connection with an attempt to innovate. The initial rumor set off a storm of protest that brought out CEO Jack Dorsey to deny (sort of) that the timeline would change. Today the other shoe dropped: the timeline will change (sort of).

Essentially, it will be a re-implementation of the “While You Were Away” feature with an opt-out:

In the “coming weeks,” Twitter will turn on the feature for users by default, and put a notification in the timeline when it does, Haq says. But even then, you’ll be able to turn it off again.

Of course, Twitter’s expectation is that most people will like the timeline tweak—or at least not hate it—once they’re exposed to it. “We have the opt-out because we also prioritize user control,” Haq says. “But we do encourage people to give it a chance.”

So, what does this have to do with the Open/Closed Principle? The Wikipedia article for it contains a quote from Bertrand Meyer’s Object-Oriented Software Construction (emphasis is mine):

A class is closed, since it may be compiled, stored in a library, baselined, and used by client classes. But it is also open, since any new class may use it as parent, adding new features. When a descendant class is defined, there is no need to change the original or to disturb its clients.

Just as a change to the code of a class may disturb its clients, a change to the user experience of a product may disturb the clientele. Sometimes extension won’t work and change must take place. As it turns out, though, the timeline has been extended with optional behavior rather than changed unconditionally, as was rumored.
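To make the parallel concrete, here’s a minimal sketch of the principle (in Java, with hypothetical names borrowed from the timeline analogy; nothing here is Twitter’s actual design): the original class stays closed to modification, while new behavior arrives as an extension that existing clients can ignore.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Closed: compiled, stable, and safe for existing clients to depend on.
class ChronologicalTimeline {
    public List<String> render(List<String> tweets) {
        return tweets; // clients rely on strict reverse-chronological order
    }
}

// Open: new behavior arrives by extension; the original class and its
// clients are left undisturbed. "Opting in" means choosing the subclass.
class RankedTimeline extends ChronologicalTimeline {
    @Override
    public List<String> render(List<String> tweets) {
        List<String> ranked = new ArrayList<>(super.render(tweets));
        // Stand-in "relevance" ordering; a real ranking algorithm goes here.
        ranked.sort(Comparator.comparingInt(String::length).reversed());
        return ranked;
    }
}

public class TimelineDemo {
    public static void main(String[] args) {
        List<String> tweets = List.of("newest tweet", "middle", "oldest");
        ChronologicalTimeline standard = new ChronologicalTimeline();
        ChronologicalTimeline optedIn = new RankedTimeline(); // the extension
        System.out.println(standard.render(tweets)); // unchanged experience
        System.out.println(optedIn.render(tweets));  // new behavior, by choice
    }
}
```

The parallel to the rollout is direct: the default experience stays intact, and the new behavior is something the client chooses.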

Some thoughts:

  • Twitter isn’t the only social media application out there with a timeline for updates. Perhaps that chronological timeline (among other features) provides some value to the user base?
  • Assuming that value and the risk of upsetting the user base if that value was taken away, wouldn’t it have been wise to communicate in advance? Wouldn’t it have been even wiser to communicate when the rumor hit?

Innovation will involve change, but not all change is necessarily innovative. Understanding customer wants and needs is a good first step to identifying risky changes to user experience (whether real or just perceived). I’d argue this is even more pronounced when you consider that Twitter’s user base is really its product. Twitter’s customers are advertisers and data consumers who want and need an engaged, growing user base to view promoted Tweets and generate usage data.

Returning to the Tweet at the beginning of this post: considering the accuracy of that recommendation, would it be reasonable to think turning over your timeline to Twitter’s algorithms might degrade your user experience?

Innovation on Tap

Beer Tap

Two articles from the same site (CIO.com), both dealing with planned innovations, but with dramatically different results.

While the article about the Amazon leak doesn’t report on customer reactions, the response is unlikely to be negative, for a variety of reasons that all involve benefit to the customer. The most important one, and the big innovation (in my opinion), is that opening physical stores allows Amazon to take advantage of a phenomenon that other retailers find problematic:

Another way Amazon eviscerates traditional retailers is via the practice of showrooming. That’s where you go to a brick-and-mortar store and find the product you want but then buy it on Amazon — sometimes standing right there in the store, using Amazon’s mobile app.

Physical Amazon stores can encourage and facilitate showrooming, because they will have friendly salespeople to help you find, install and use the app. Once they teach you how to showroom in an Amazon retail store, you can continue showrooming in other stores.

While large retailers have been able to “combat” showrooming by embracing it, selling items at online prices in physical stores digs deeply into profit margins. When your business model is predominantly brick and mortar, the hit is greater than when your model is predominantly online. In short, if it plays its cards right, Amazon will be able to better serve its customers without hurting its own bottom line.

Awareness of your customers’ wants and needs is key to making decisions that are more like Amazon’s and less like Twitter’s. An intentional, collaborative approach, such as the one Greger Wikstrand describes in his post “Developing the ‘innovation habit’”, is one way to promote that awareness:

When I worked at Ericsson Research, I instigated a weekly one-hour innovation meeting for my team. ‘Can you really be innovative every Thursday at 9am?’ you may cynically ask. Well, actually, yes, you can.

What we did in that hour was commit to innovation, dedicating a place and a time to our individual and collective innovative mindsets. Sometimes this resulted in little ideas that helped us do things incrementally better. And sometimes—just sometimes—those innovation hours were the birthplace of big ideas.

Collaboration is important because “Architect knows best” is as much a design folly at the enterprise level as it is at the application level. A better model is the one Tom Graves describes in “Auftragstaktik and fingerspitzengefühl”, where information and guidance flow both up and down the hierarchy to inform decisions both strategic and tactical. These flows provide the context so necessary to making effective decisions.

You can’t pull a tap and draw a glass of innovation, but you can affect whether your system makes innovative ideas more or less likely to be recognized and acted on.

This is part eleven of a conversation Greger Wikstrand and I have been having about architecture, innovation, and organizations as systems. Previous posts in the series:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors and avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger pointed out that structure alone is insufficient to promote innovation: organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time when innovation yielded uneven results, because effective innovation requires technology, an understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued the theme, addressing the social and technological aspects of innovation.
  7. “Organizations and Innovation – Swim or Die!” – I discussed the ongoing need of organizations to adapt to their changing contexts or risk “death”.
  8. “Innovation – Resistance is Futile” – Continuing in the same vein, Greger pointed out that resistance to change is futile (though probably inevitable). He quoted a professor of his who asserted that you can’t change people or groups; thus, you have to change the organization.
  9. “Changing Organizations Without Changing People” – I followed up on Greger’s post, agreeing that enterprise architectures must work “with the grain” of human nature and that culture is “walking the walk”, not just “talking the talk”.
  10. “Developing the ‘innovation habit’” – Greger talked about creating an intentional, collaborative innovation program.

Ignorance Isn’t Bliss, Just Good Tactics

Donkey

There’s an old saying about what happens when you assume.

The fast lane to asininity seems to run through the land of hubris. Anshu Sharma’s TechCrunch article, “Why Big Companies Keep Failing: The Stack Fallacy”, illustrates this:

Stack fallacy has caused many companies to attempt to capture new markets and fail spectacularly. When you see a database company thinking apps are easy, or a VM company thinking big data is easy — they are suffering from stack fallacy.

Stack fallacy is the mistaken belief that it is trivial to build the layer above yours.

Why do people fall prey to this fallacy?

The stack fallacy is a result of human nature — we (over)value what we know. In real terms, imagine you work for a large database company and the CEO asks, “Can we compete with Intel or SAP?” Very few people will imagine they can build a computer chip just because they can build relational database software, but because of our familiarity with building blocks of the layer up, it is easy to believe you can build the ERP app. After all, we know tables and workflows.

The bottleneck for success often is not knowledge of the tools, but lack of understanding of the customer needs. Database engineers know almost nothing about what supply chain software customers want or need.

This kind of assumption can cost an ISV a significant amount of money and a lot of goodwill on the part of the customer(s) they attempt to disrupt. Assumptions about the needs of the customer (rather than the customer’s customer) can be even more expensive. The smaller your pool of customers, the more damage that’s likely to result. Absent a captive customer, incorrect assumptions in the world of bespoke software can be particularly costly (even if only in terms of goodwill). Even comprehensive requirements are of little benefit without the knowledge necessary to interpret them:

But, that being said:

This would seem to pose a dichotomy: domain knowledge as both something vital and an impediment. In reality, there’s no contradiction. As the old saying goes, “a little knowledge is a dangerous thing”. When we couple that with another cliché, “familiarity breeds contempt”, we wind up with Sharma’s stack fallacy, or, as xkcd expressed it:

'Purity' on xkcd.com

In order to create and evolve effective systems, we obviously have a need for domain knowledge. We also have a need to understand that what we possess is not domain knowledge per se, but domain knowledge filtered through (and likely adulterated by) our own experiences and biases. Without that understanding, we risk what Richard Martin described in “The myopia of expertise”:

In the world of hyperspecialism, there is always a danger that we get stuck in the furrows we have ploughed. Digging ever deeper, we fail to pause to scan the skies or peer over the ridge of the trench. We lose context, forgetting the overall geography of the field in which we stand. Our connection to the surrounding region therefore breaks down. We construct our own localised, closed system. Until entropy inevitably has its way. Our system then fails, our specialism suddenly rendered redundant. The expertise we valued so highly has served to narrow and shorten our vision. It has blinded us to potential and opportunity.

The Clean Room pattern on CivicPatterns.org puts it this way:

Most people hate dealing with bureaucracies. You have to jump through lots of seemingly pointless hoops, just for the sake of the system. But the more you’re exposed to it, the more sense it starts to make, and the harder it is to see things through a beginner’s eyes.

So, how do we get a beginner’s eyes? Or, at least, how do we get closer to having them?

The first step is to reject the notion that we already understand the problem space. Lacking innate understanding, we must then do the hard work of discovering the architecture of the problem, our context. As Paul Preiss noted, this doesn’t happen at a desk:

Architecture happens in the field, the operating room, the sales floor. Architecture is business technology innovation turned to strategy and then executed in reality. Architecture is reducing the time it takes to produce a barrel of oil, decreasing mortality rates in the hospital, increasing product margin.

Being willing to ask “dumb” questions is just as important. Perception without validation may be just an assumption. Seeing isn’t believing; seeing and validating what you’ve seen is believing.

It’s equally important to understand that validating our assumptions goes beyond just asking for requirements. Stakeholders can be subject to biases and myopic viewpoints as well. While it’s true that Henry Ford’s customers would probably have asked for faster horses, it’s also true that, in a way, that’s exactly what he delivered.

We earn our money best when we learn what’s needed and synthesize those needs into an effective solution. That learning depends on communication unimpeded by our pride or prejudice.

#ShadowSocialMedia or Why Won’t People Use the Product the Way They’re Supposed to

Scott Berkun dislikes the way people are using images to bypass Twitter’s 140-character limit:

His point is very valid, but:

Which is the issue: sometimes there’s a need to go beyond that limit. Sure, you can chunk your thoughts up across multiple tweets, but users find it burdensome to respect Twitter’s constraint on the amount of text per tweet. Constrained customers, assuming they stick with a product, tend to come up with “creative” solutions to that product’s shortcomings, solutions that reflect what they value. The customers’ values may well conflict with the developers’. When “conflict” and “customer” are in the same sentence, there’s generally a problem.

Berkun’s response to @honatwork‘s rebuttal nearly captures the issue:

I say “nearly” because Twitter was built long before 2015. The problem is that it’s now 2015 and Twitter has not evolved to meet a need that clearly exists.

In the IT world, it’s common to hear terms like “Shadow IT” or “Rogue IT”. Both refer to users (i.e., customers) going beyond the pale of approved tools and techniques to meet a need. This poses a problem for IT in that the customer’s solution may not incorporate things that IT values, and retrofitting those concerns later is far more difficult. Taking a “products, not projects” approach can minimize the need for customer “creativity”, for both in-house IT and external providers.

Trying to hold back the tide just won’t work, because the purpose of the system is to meet the customers’ needs, not respect the designers’ intent.

Product Owner or Live Customer?

Wooden Doll

Who calls the shots for a given application? Who gets to decide which features go on the roadmap (and when) and which are left by the curb? Is there one universal answer?

Many of my recent posts have dealt in some way with the correlation between organizational structure and application/solution architecture. While this is largely to be expected due to Conway’s Law, as I noted in “Making and Taming Monoliths”, many business processes depend on other business processes. While a given system may track a particular related set of business capabilities, concepts belonging to other capabilities, and features related to maintaining them, tend to creep in. This tends to multiply the number of business stakeholders for any given system, and that’s before taking into account the various IT groups that will also have an interest in it.

Scrum has the role of Product Owner, whose job it is to represent the customer. Allen Holub, in a blog post for Dr. Dobb’s, terms this “flawed”, preferring Extreme Programming’s (XP) requirement for an on-site customer. For Holub, introducing an intermediary adds risk because the product owner is not a true customer:

Agile processes use incremental design, done bit by bit as the product evolves. There is no up-front requirements gathering, but you still need the answers you would have gleaned in that up-front process. The question comes up as you’re working. The customer sitting next to you gives you the answers. Same questions, same answers, different time.

Put simply, agile processes replace up-front interviews with ongoing interviews. Without an on-site customer, there are no interviews, and you end up with no design.

In Holub’s view, the product owner introduces either guesses or delay. Worst of all, in his opinion, is when the product owner is “…just the mouthpiece of a separate product-development group (or a CEO) who thinks that they know more about what the customer wants than the actual customer…”. The cost of hiring an on-site customer, he argues, is negligible when you factor in the risks and costs of wrong product decisions.

Hiring a customer, however, pretty much presumes an ISV environment. For in-house corporate work, the customer is already on board. In both cases, though, two issues exist: the first is that a customer who is also the decision-maker risks giving the product a very narrow appeal; the second is that the longer the “customer” is away from actually being a customer, the less current their knowledge of the domain becomes. Put together, you wind up with a product that reflects one person’s outdated opinion of how to solve a problem.

Narrowness of opinion should be sufficient to give pause. As Jim Bird observed in “How Product Ownership works in the Real World”:

There are too many functional and operational and technical details for one person to understand and manage, too many important decisions for one person to make, too many problems to solve, too many questions and loose ends to follow-up on, and not enough time. It requires too many different perspectives: business needs and business risks, technical dependencies and technical risks, user experience, supportability, compliance and security constraints, cost factors. And there are too many stakeholders who need to be consulted and managed, especially in a big, live business-critical system.

As is the case with everything related to software development, there is a balance to be struck. Coherence of vision enhances design up to a point, but past that point it can lead to a too-insular focus. Mega-monoliths that attempt to deal comprehensively with every aspect of an enterprise, if they ever launch, are unwieldy to maintain. By the same token, business dependencies become system dependencies; they cannot be eliminated, only re-arranged (microservices being a classic example: the dependencies are not removed, only their location has changed). The idea that we can have only one voice directing development is, in my opinion, just too simplistic.
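Returning to the microservices parenthetical, here’s a hedged sketch (hypothetical names and endpoint throughout) of how the same business dependency survives the re-arrangement; it merely moves from a method call to a network call:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Monolith: ordering depends on inventory via an in-process call.
class InventoryComponent {
    boolean inStock(String sku) { return true; } // stand-in lookup
}

class MonolithOrders {
    private final InventoryComponent inventory = new InventoryComponent();

    boolean placeOrder(String sku) {
        return inventory.inStock(sku); // the dependency: a method call
    }
}

// Microservices: the same business dependency, relocated. It can now
// time out, fail partially, or return stale data, but it has not gone away.
class MicroserviceOrders {
    private final HttpClient http = HttpClient.newHttpClient();

    boolean placeOrder(String sku) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://inventory.internal/stock/" + sku)) // hypothetical endpoint
            .GET()
            .build();
        HttpResponse<String> response =
            http.send(request, HttpResponse.BodyHandlers.ofString());
        return Boolean.parseBoolean(response.body()); // the dependency: an HTTP call
    }
}
```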

“Legacy Systems – Constraints, Conflicts, and Customers” on Iasa Global Blog

Cradle to the grave (in 6.2 seconds)

As I was reading Roger Sessions’ latest white paper, “The Thirteen Laws of Highly Complex IT Systems”, Laws 1 and 2 immediately caught my eye:

Law 1. There are three categories of complexity: business, architectural and implementation.

Law 2. The three categories of complexity are largely independent of each other.

That complexity in these categories can vary independently (e.g., complex business processes can be designed and implemented simply, just as simple processes can be designed and implemented in an extremely complex manner) is important to the understanding of complexity in IT.
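As a hedged illustration of that independence (hypothetical names, not taken from Sessions’ paper): the business rule below is trivially simple in both versions; only the implementation complexity differs, and it differs by choice.

```java
// Simple business rule, simple implementation: a flat 10% discount.
class SimpleDiscount {
    static double apply(double price) {
        return price * 0.90;
    }
}

// The same simple business rule, implemented in a needlessly complex
// manner: strategies, factories, and an engine, none of it demanded
// by the business.
interface DiscountStrategy {
    double apply(double price);
}

interface DiscountStrategyFactory {
    DiscountStrategy create();
}

class TenPercentStrategy implements DiscountStrategy {
    @Override
    public double apply(double price) {
        return price * 0.90;
    }
}

class DefaultDiscountStrategyFactory implements DiscountStrategyFactory {
    @Override
    public DiscountStrategy create() {
        return new TenPercentStrategy();
    }
}

class DiscountEngine {
    private final DiscountStrategyFactory factory;

    DiscountEngine(DiscountStrategyFactory factory) {
        this.factory = factory;
    }

    double apply(double price) {
        return factory.create().apply(price); // same answer, more moving parts
    }
}
```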

See the full post on the Iasa Global Blog (a re-post, originally published here).

Design Follies – ‘Why can’t I do that?’

Man in handcuffs

It’s ironic that the traits we think of as making a good developer are also those that can get in the way of design and testing, yet that’s exactly the case. Think of how many times you’ve heard (or perhaps said) “no one would ever do that”. Yet, given the event-driven, non-linear nature of modern systems, if a given execution path can occur, it will occur. Our cognitive biases can blind us to the issues that arise when our product is used in ways we did not intend. As Thomas Wendt observed in “The Broken Worldview of Experience Design”:

To a certain extent, the designer’s intent is irrelevant once the product launches. That is, intent can drive the design process, but that’s not the interesting part; the ways in which users adapt the product to their own needs is where the most insight comes from. Designer intent is a theoretical, speculative formulation even when based on the most rigorous research methods and valid interpretations. That is not to say intention and strategic positioning is not important, but simply that we need to consider more than idealized outcomes.

Abhi Rele, in “APIs and Data: Journey to the Center of the Customer Experience”, put it in more concrete terms:

If you think you’re in full control of your customers’ experience, you’re wrong.

Customers increasingly have taken charge—they know what they want, when they want it, and how they want it. They are using their mobile phones more often for an ever-growing list of tasks—be it searching for information, looking up directions, or buying products. According to Google, 34% of consumers turn to the device that’s closest to them. More often than not, they’re switching from one channel or device mid-transaction; Google found that 67% of consumers do just that. They might start their product research on the web, but complete the purchase on a smartphone.

Switch devices in mid-transaction? No one would ever do that! Oops.

We could, of course, decide to block those paths that we don’t consider “reasonable” (as opposed to stopping actual error conditions). The problem with that approach is that our definition of “reasonable” may conflict with the customer’s definition. When “conflict” and “customer” are in the same sentence, there’s generally a problem.
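As a hedged sketch of that distinction (hypothetical names, riffing on the device-switching example above): the first method blocks a path the designer deemed unreasonable, while the second rejects only a genuine error condition.

```java
// A session for a multi-step purchase. The customer may well resume it
// from a different device mid-transaction, as Google's numbers suggest.
class CheckoutSession {
    private String deviceId;
    private final long expiresAtMillis;

    CheckoutSession(String deviceId, long expiresAtMillis) {
        this.deviceId = deviceId;
        this.expiresAtMillis = expiresAtMillis;
    }

    // Designer's-intent version: switching devices is treated as an
    // error, blocking a path real customers actually take.
    void resumeBlocking(String fromDeviceId) {
        if (!deviceId.equals(fromDeviceId)) {
            throw new IllegalStateException("Sessions must stay on one device");
        }
    }

    // Error-condition version: any device may resume; only a genuine
    // invariant (the session has expired) is enforced.
    void resume(String fromDeviceId, long nowMillis) {
        if (nowMillis >= expiresAtMillis) {
            throw new IllegalStateException("Session expired"); // an actual error
        }
        deviceId = fromDeviceId; // the customer switched devices; follow them
    }
}
```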

These conflicts, in the right domain, can even have deadly results. While investigating the Asiana Airlines crash from July of 2013, the National Transportation Safety Board (NTSB) found that the crew’s beliefs about what the autopilot system would do did not coincide with what it actually did (my emphasis):

The NTSB found that the pilots had “misconceptions” about the plane’s autopilot systems, specifically what the autothrottle would do in the event that the plane’s airspeed got too low.

In the setting that the autopilot was in at the time of the accident, the autothrottles that are used to maintain specific airspeeds, much like cruise control in a car, were not programmed to wake up and intervene by adding power if the plane got too slow. The pilots believed otherwise, in part because in other autopilot modes on the Boeing 777, the autothrottles would in fact do this.

“NTSB Blames Pilots in July 2013 Asiana Airlines Crash” on Mashable.com
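To illustrate the shape of the problem, here’s a sketch that is grossly simplified relative to the real 777 systems (every name and number in it is hypothetical): the same protection exists in one mode and silently not in another, precisely the kind of inconsistency that breeds misconceptions.

```java
// Autopilot modes in which a crew might reasonably expect the same
// low-speed protection. In this sketch, only one mode provides it.
enum AutopilotMode { SPEED_HOLD, FLIGHT_LEVEL_CHANGE }

class Autothrottle {
    private static final double MIN_SAFE_SPEED_KNOTS = 137.0; // stand-in value

    // Returns a thrust command between 0.0 and 1.0.
    double command(AutopilotMode mode, double airspeedKnots, double currentThrust) {
        // In one mode, low airspeed "wakes up" the autothrottle...
        if (mode == AutopilotMode.SPEED_HOLD && airspeedKnots < MIN_SAFE_SPEED_KNOTS) {
            return Math.min(1.0, currentThrust + 0.25); // add power
        }
        // ...in the other, it does nothing, contrary to what a crew
        // generalizing from the first mode would expect.
        return currentThrust;
    }
}
```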

Even if it doesn’t contribute to a tragedy, a poor user experience (inconsistent, unstable, or overly restrictive) can lead to unintended consequences, customer dissatisfaction, or both. Basing that user experience on assumptions instead of research and/or testing increases the risk. As I’ve stated previously, risky assumptions are an assumption of risk.