Twitter, Timelines, and the Open/Closed Principle

Consider this Tweet for a moment. I’ll be coming back to it at the end.

In my last post, I brought up Twitter’s rumored changes to the timeline feature as a poor example of customer awareness in connection with an attempt to innovate. The initial rumor set off a storm of protest that brought out CEO Jack Dorsey to deny (sort of) that the timeline would change. Today, the other shoe dropped: the timeline will change (sort of):

Essentially, it will be a re-implementation of the “While You Were Away” feature with an opt-out:

In the “coming weeks,” Twitter will turn on the feature for users by default, and put a notification in the timeline when it does, Haq says. But even then, you’ll be able to turn it off again.

Of course, Twitter’s expectation is that most people will like the timeline tweak—or at least not hate it—once they’re exposed to it. “We have the opt-out because we also prioritize user control,” Haq says. “But we do encourage people to give it a chance.”

So, what does this have to do with the Open/Closed Principle? The Wikipedia article for it contains a quote from Bertrand Meyer’s Object-Oriented Software Construction (emphasis is mine):

A class is closed, since it may be compiled, stored in a library, baselined, and used by client classes. But it is also open, since any new class may use it as parent, adding new features. When a descendant class is defined, there is no need to change the original or to disturb its clients.

Just as a change to the code of a class may disturb its clients, a change to the user experience of a product may disturb the clientele. Sometimes extension won’t work and change must take place. As it turns out, the timeline has been extended with optional behavior rather than changed unconditionally, as was rumored.
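The parallel can be sketched in code. Here’s a minimal illustration in Python (the class names and tweet fields are my own invention, not Twitter’s actual design): the original class remains untouched and “closed”, while the new, optional behavior arrives as an extension.

```python
class Timeline:
    """Chronological timeline -- 'closed': clients relying on it are undisturbed."""

    def arrange(self, tweets):
        # Newest-first, by timestamp
        return sorted(tweets, key=lambda t: t["time"], reverse=True)


class RankedTimeline(Timeline):
    """'Open' for extension: adds opt-in ranking without modifying Timeline."""

    def arrange(self, tweets):
        chronological = super().arrange(tweets)
        # Hypothetical relevance score; promote high-scoring tweets to the top.
        # Ties keep their chronological order because Python's sort is stable.
        return sorted(chronological, key=lambda t: t.get("score", 0), reverse=True)
```

Clients that want the old behavior keep using `Timeline`; those who opt in get `RankedTimeline`. Nobody who depended on the original is disturbed, which is exactly the courtesy Twitter’s opt-out attempts to extend to its users.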

Some thoughts:

  • Twitter isn’t the only social media application out there with a timeline for updates. Perhaps that chronological timeline (among other features) provides some value to the user base?
  • Assuming that value and the risk of upsetting the user base if that value was taken away, wouldn’t it have been wise to communicate in advance? Wouldn’t it have been even wiser to communicate when the rumor hit?

Innovation will involve change, but not all change is necessarily innovative. Understanding customer wants and needs is a good first step to identifying risky changes to user experience (whether real or just perceived). I’d argue this is even more pronounced when you consider that Twitter’s user base is really its product. Twitter’s customers are advertisers and data consumers who want and need an engaged, growing user base to view promoted Tweets and generate usage data.

Returning to the Tweet at the beginning of this post: considering the accuracy of that recommendation, would it be reasonable to think that turning over your timeline to their algorithms might degrade your user experience?



Innovation on Tap

Beer Tap

Two articles from the same site, both dealing with planned innovations, but with dramatically different results:

While the article about the Amazon leak doesn’t report on customer reactions, that response is unlikely to be negative for a variety of reasons, all of which involve benefit to the customer. The most important one, the big innovation (in my opinion), is that opening physical stores allows it to take advantage of a phenomenon that other retailers find problematic:

Another way Amazon eviscerates traditional retailers is via the practice of showrooming. That’s where you go to a brick-and-mortar store and find the product you want but then buy it on Amazon — sometimes standing right there in the store, using Amazon’s mobile app.

Physical Amazon stores can encourage and facilitate showrooming, because they will have friendly salespeople to help you find, install and use the app. Once they teach you how to showroom in an Amazon retail store, you can continue showrooming in other stores.

While large retailers have been able to “combat” showrooming by embracing it, selling items at online prices in physical stores digs deeply into profit margins. When your business model is predominantly brick and mortar, the hit is greater than when your model is predominantly online. In short, if they play their cards right, Amazon will be able to better serve their customers without hurting their own bottom line.

Awareness of your customers’ wants and needs is key to making decisions that are more like Amazon’s and less like Twitter’s. An intentional, collaborative approach, such as the one described by Greger Wikstrand in his post “Developing the ‘innovation habit’”, is one way to promote that awareness:

When I worked at Ericsson Research, I instigated a weekly one-hour innovation meeting for my team. ‘Can you really be innovative every Thursday at 9am?’ you may cynically ask. Well, actually, yes, you can.

What we did in that hour was commit to innovation, dedicating a place and a time to our individual and collective innovative mindsets. Sometimes this resulted in little ideas that helped us do things incrementally better. And sometimes—just sometimes—those innovation hours were the birthplace of big ideas.

Collaboration is important because “Architect knows best” is as much a design folly at the enterprise level as it is at the application level. A better model is the one described by Tom Graves in “Auftragstaktik and fingerspitzengefühl”, where both information and guidance flow both up and down the hierarchy to inform decisions both strategic and tactical. These flows provide the context so necessary to making effective decisions.

You can’t pull a tap and draw a glass of innovation, but you can affect whether your system makes innovative ideas more or less likely to be recognized and acted on.

This is part eleven of a conversation Greger Wikstrand and I have been having about architecture, innovation, and organizations as systems. Previous posts in the series:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making. It combines analysis with situational awareness to deal with emergent factors and to avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure is insufficient to promote innovation; organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time where innovation yielded uneven results because effective innovation requires technology, understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued with the same theme, the social and technological aspects of innovation.
  7. “Organizations and Innovation – Swim or Die!” – I discussed the ongoing need of organizations to adapt to their changing contexts or risk “death”.
  8. “Innovation – Resistance is Futile” – Continuing in the same vein, Greger points out that resistance to change is futile (though probably inevitable). He quotes a professor of his who asserted that you can’t change people or groups; thus, you have to change the organization.
  9. “Changing Organizations Without Changing People” – I followed up on Greger’s post, agreeing that enterprise architectures must work “with the grain” of human nature and that culture is “walking the walk”, not just “talking the talk”.
  10. “Developing the ‘innovation habit’” – Greger talks about creating an intentional, collaborative innovation program.



Ignorance Isn’t Bliss, Just Good Tactics


There’s an old saying about what happens when you assume.

The fast lane to asininity seems to run through the land of hubris. Anshu Sharma’s Tech Crunch article, “Why Big Companies Keep Failing: The Stack Fallacy”, illustrates this:

Stack fallacy has caused many companies to attempt to capture new markets and fail spectacularly. When you see a database company thinking apps are easy, or a VM company thinking big data is easy — they are suffering from stack fallacy.

Stack fallacy is the mistaken belief that it is trivial to build the layer above yours.

Why do people fall prey to this fallacy?

The stack fallacy is a result of human nature — we (over) value what we know. In real terms, imagine you work for a large database company and the CEO asks, “Can we compete with Intel or SAP?” Very few people will imagine they can build a computer chip just because they can build relational database software, but because of our familiarity with the building blocks of the layer up, it is easy to believe you can build the ERP app. After all, we know tables and workflows.

The bottleneck for success often is not knowledge of the tools, but lack of understanding of the customer needs. Database engineers know almost nothing about what supply chain software customers want or need.

This kind of assumption can cost an ISV a significant amount of money and a lot of good will on the part of the customer(s) they attempt to disrupt. Assumptions about the needs of the customer (rather than the customer’s customer) can be even more expensive. The smaller your pool of customers, the more damage that’s likely to result. Absent a captive customer circumstance, incorrect assumptions in the world of bespoke software can be particularly costly (even if only in terms of good will). Even comprehensive requirements are of little benefit without the knowledge necessary to interpret them:

But, that being said:

This would seem to pose a dichotomy: domain knowledge as both something vital and an impediment. In reality, there’s no contradiction. As the old saying goes, “a little knowledge is a dangerous thing”. When we couple that with another cliche, “familiarity breeds contempt”, we wind up with Sharma’s stack fallacy, or as xkcd expressed it:

‘Purity’ on xkcd

In order to create and evolve effective systems, we obviously have a need for domain knowledge. We also have a need to understand that what we possess is not domain knowledge per se, but domain knowledge filtered through (and likely adulterated by) our own experiences and biases. Without that understanding, we risk what Richard Martin described in “The myopia of expertise”:

In the world of hyperspecialism, there is always a danger that we get stuck in the furrows we have ploughed. Digging ever deeper, we fail to pause to scan the skies or peer over the ridge of the trench. We lose context, forgetting the overall geography of the field in which we stand. Our connection to the surrounding region therefore breaks down. We construct our own localised, closed system. Until entropy inevitably has its way. Our system then fails, our specialism suddenly rendered redundant. The expertise we valued so highly has served to narrow and shorten our vision. It has blinded us to potential and opportunity.

The Clean Room pattern puts it this way:

Most people hate dealing with bureaucracies. You have to jump through lots of seemingly pointless hoops, just for the sake of the system. But the more you’re exposed to it, the more sense it starts to make, and the harder it is to see things through a beginner’s eyes.

So, how do we get those beginner’s eyes? Or, at least, how do we get closer to having a beginner’s eyes?

The first step is to reject the notion of our own understanding of the problem space. Lacking innate understanding, we must then do the hard work of determining what the architecture of the problem, our context, is. As Paul Preiss noted, this doesn’t happen at a desk:

Architecture happens in the field, the operating room, the sales floor. Architecture is business technology innovation turned to strategy and then executed in reality. Architecture is reducing the time it takes to produce a barrel of oil, decreasing mortality rates in the hospital, increasing product margin.

Being willing to ask “dumb” questions is just as important. Perception without validation may be just an assumption. Seeing isn’t believing. Seeing and validating what you’ve seen is believing.

It’s equally important to understand that validating our assumptions goes beyond just asking for requirements. Stakeholders can be subject to biases and myopic viewpoints as well. While it’s true that Henry Ford’s customers would probably have asked for faster horses, it’s also true that, in a way, that’s exactly what he delivered.

We earn our money best when we learn what’s needed and synthesize those needs into an effective solution. That learning is dependent on communication unimpeded by our pride or prejudice:



The Business of IT – Customers, Clients, and Fit for Purpose

Nesting Doll

Over the past few months, I have touched on a variety of what might seem to be disparate topics: the need for architects (or at least architectural design), estimates, organizations as systems/enterprise architecture, customer-centricity, and IT management and governance. I suspect the trend will continue for a while, so it’s time for a post to explicitly tie things together.

The concept of “organizations as systems” is the cornerstone. Software systems do not exist in isolation, but within a fractal set of contexts, each of which is a system itself. These systems serve as the ecosystem for those they contain and ultimately terminate in one or more social systems. Understanding the ecosystem is critical to ensuring that its component systems are fit for purpose; otherwise we risk trying to introduce fish into a desert.

The social system context is, in my opinion, key. Information technology delivery can take place in a wide variety of ways, but a useful high-level differentiation is that between delivery of an off-the-shelf product versus delivery of a service. Is the customer a client?

These two sentences, left alone, could be a source of confusion. Precision isn’t one of the English language’s strong points. Not all products are off-the-shelf, sometimes the service we deliver is the creation of a bespoke product (ultimately, the end user is concerned with the product that they will use, not the project by which we created it). Likewise, in my opinion, clients are customers, just not all customers are clients. In my career, I’ve exclusively dealt with client customers, so I tend to use the two terms interchangeably. That becomes a problem when we have a client that isn’t being treated like a client. Seth Godin’s recent post explained the difference nicely:

The customer buys (or doesn’t buy) what you make.

The client asks you to make something.

The customer has the power to choose, but the client has the power to define, insist and spec.

There is a large number of potential customers, and you make for them before you know precisely who they are.

There are just a relative handful of clients, though, and your work happens after you find them.

If a customer doesn’t like what’s on offer, she can come back tomorrow. If the client doesn’t like what you deliver, she might leave forever.

When a vendor fails to treat its customers as clients, it risks losing them. When an in-house IT organization fails to treat its captive customers as clients, the result is “Shadow IT”. Complaining about that result betrays a shocking lack of awareness on the part of the complainer.

A client will have a hierarchy of concerns in relation to IT. The client’s main interest is the job that they’re doing (or will be doing) with a product. Their concern with the product will be shaped by that, as will their concern with the process by which the product is delivered. We shouldn’t be surprised when things that don’t promote the client’s welfare (or, at least, aren’t presented in that manner) are met with a lack of interest. We definitely shouldn’t be surprised when things that are seen as a net negative to the customer are met with hostility. Understanding and respecting the client’s point of view is required to avoid damaging our credibility and relationship with them.

Meeting the needs of my clients is my objection to #NoEstimates. Clients may have questions that require estimates to answer. In “Estimates – Uses, Abuses, and that Credibility Thing Again”, I listed several that were taken from a post by a prominent #NoEstimates advocate. In my opinion, those questions become my concern by virtue of being my client’s concern. Suggesting, as that advocate does, that they need different questions is not serving their needs.

Just as the infrastructure environment can influence design decisions, so can the social environment. While it is certainly possible to deliver IT in an environment where the clients’ needs are largely ignored, doing so is detrimental to the clients, to the providers, and to the organization(s) to which they belong. We wouldn’t try to force a square peg in a round hole and we shouldn’t try to force ill-fitting solutions on those who depend on our work. Products generally work best when they’re designed to be used in the way they’re used. The same holds true for services.

[Recursive Matrena 01 image by Vahram_Mekhitarian via Wikimedia Commons]



OODA vs PDCA – What’s the Difference?


In my post “Architecture and OODA Loops – Fast is not Enough”, I stated that sense-making and decision-making were critical skills for the practice of software architecture. I further stated that I found the theories of John Boyd, particularly his OODA loop, useful in understanding and describing effective sense-making and decision-making. My conclusion was that in order to decide and act in the most effective manner possible, one must observe the context as effectively as possible and orient, or make sense of those observations, as effectively as possible. In other words, the quality of the decision depends on the quality of the cognition.

One of the many things that I enjoy about blogging and engaging on media like Twitter and LinkedIn is the feedback. The questions and comments I get in response to one post ensure that I don’t have to worry about writer’s block getting in the way of my next one. In this instance, Greger Wikstrand obliged, providing the topic for this post:

As I replied to Greger, I was familiar with the PDCA cycle and it has some similarities to Boyd’s OODA loop, but I preferred OODA for a variety of reasons. Before I get into those reasons, however, it would be useful to lay out what distinguishes the two methods. In a guest post on the blog Slightly East of New, “PDCA vs. OODA — Why not take both?”, Deane Lenane does just this:

Some people who are familiar with the canon of both of these men’s work, often fall into the error of seeing the O-O-D-A loop as a function of the P-D-C-A loop or vice versa. I think this is a mistake.

The P-D-C-A cycle or loop is primarily an analytical approach that can be used with great success in a completely internal manner. One does not need to consult the external environment or adjust to unfolding circumstances to make the P-D-C-A loop work. P-D-C-A can be used with great success on the shop floor with the data that is available. Analysis involves the use of a more or less complete data set to reach a conclusion. We use the data to make a decision about how to proceed; we then check and act to confirm or reject the hypothesis that our analysis has led us to.

O-O-D-A is more concerned with synthesizing an action out of an incomplete data set. Since we can never recognize all of the variables that we are forced to deal with in any environment, we must be able to make a decision that we believe will give us the highest probability for success. The synthesis of an action from the observation and orientation of a complex and mysterious environment, subject to frequent and unpredictable change, is the essence of the O-O-D-A loop.

My conclusion is that P-D-C-A is primarily involved with analysis perhaps using some synthesis and that O-O-D-A is primarily involved with synthesis using all of the analytical data points possible but considering that the data set will always be largely incomplete.

Three aspects of Lenane’s differentiation point me toward OODA: “environment” (i.e. the evaluation of the external context we must deal with rather than the more inward looking PDCA), “synthesizing an action”, and “incomplete data set”. Systems exist in an environment, an ecosystem with a back-story. Failing to account for those ensures problems if not outright failure. Likewise, our aim is to make a decision to the best of our ability under uncertainty. These concepts mesh nicely with the practice of architecture in my opinion.

Another aspect relates back to the PCA acronym in Greger’s tweet. It took some research for us to find a good resource on that, because:

However, Greger did find one (slide 3 of this deck). Essentially, it states that Perception of the world leads to Cognition which leads to Action that changes the world. Perception being the key word here. How we perceive reality is arguably more important than reality itself, because that perception will color our thinking that drives our action. The Orient portion of the OODA explicitly recognizes that our observations are filtered through our experience, education, biases, etc. Understanding and adjusting for this (to the extent we can) is important. As Tom Graves observed in “Enterprise-architect – applied-scientist, or alchemist?”:

Perhaps the most important thing here is to notice things that don’t fit our expectations – they’re often very easy to miss, especially given Gooch’s Paradox, that “things not only have to be seen to believed, but also have to be believed to be seen”.

Software architecture involves making decisions in the presence of uncertainty. In order to make the best decisions possible, we need to have the best possible grasp of our context (n.b. remembering that we are part of that context). We also need to remember that the context is not static. Each action (or inaction, for that matter) can lead to the emergence of some new issue. We can’t operate with a “one and done” philosophy.

[PDCA Loop diagram by Tagimaguiter via Wikimedia Commons]



Form Follows Function on SPaMCast 377


This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 377, features Tom’s essay on empathy, Kim Pries talking about the application of David Allen’s concepts for Getting Things Done, and the first Form Follows Function installment for 2016 on organizations and innovation.

Tom and I discuss my post “Changing Organizations Without Changing People”, talking about the need to work with, not against, human nature in the design and operation of organizations.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!



Architecture and OODA Loops – Fast is not Enough

OODA Loop Diagram

Sense-making and decision-making are critical skills for the practice of software architecture. Creating effective solutions (i.e. the collection of design decisions that make up the product) is dependent on understanding the architecture of the problem. In other words, the quality of our decisions depends on the quality of our understanding of the context those decisions were intended to address. As Paul Homer recently observed in his post “Thinking Problems”:

Thinking well is a prerequisite for handling complex problems, with or without technology. If people can’t clearly understand the issues, then any attempt to solve them is as equally likely to just make them worse. If they break down the problem into a convoluted mess of ill fitting parts, then any solutions comprised of these is unlikely to work effectively, and is more likely to make things worse. Software, as it is applied to sophisticated problems, is inordinately complex and as such can not be constructed without first arriving at a strong understanding of the underlying context. This comes from observations, but it doesn’t come together until enough thinking has been applied to it to get it into a usable state.

Systems, both automated and social (particularly social), tend to display complex, emergent behavior. The ecosystem for automated systems is a social system that may comprise other social systems. The development and operation of automated systems is, in itself, a social system. Agility in sense-making, decision-making, and execution is critical to effective operation in this environment.

This need for agility is why, when it comes to thinking about thinking, one of the theorists whose ideas I pay particular attention to is John Boyd. Boyd, a US Air Force fighter pilot, gained fame for his theories of cognition. Although initially developed in relation to the tactics of aerial combat, they were found to have a wider application (applying to strategy as well and not being limited to military domains). At its core, the theory was that to defeat an opponent, one must get inside their OODA (Observe, Orient, Decide, and Act) loop. In other words, the person who could observe circumstances, make sense of them (orient), decide on a response, and then act on that decision quickest would have the advantage.

Simple explanations of Boyd’s theories are problematic. As Tom Graves points out in his post “Seven sins, sensemaking and OODA”:

…the OODA ‘loop’ isn’t a linear loop – it’s fractal, recursive, re-entrant, looping through itself at multiple levels all at the same time. There’s sensemaking within action, action within sensemaking, decisions affecting how we sense and observe. It’s fractal.

My use of the word “quickest” earlier in the post regarding acting on a decision poses a problem. As I noted in the title, fast is not enough. One could gain speed by just acting without any observation, orientation, or decision. In my opinion, however, the result would most likely be flailing blindly. Too little analysis is detrimental.

Likewise, though, trying to deal with too much information poses a problem. From a conversation with Greger Wikstrand:

Greger rightly observes that the information available is out of our control, but our decision to filter or process it is under our control. By intentionally and intelligently filtering, we can traverse the loop quicker and more effectively.

Whether engaged in application architecture, solution architecture, or enterprise architecture, sense-making and decision-making are crucial components of the practice. That sense-making and decision-making is a continual process throughout the lifetime of the system, not just an initial or occasional phase. Constant understanding of the context the system exists within is vital because:

[OODA Loop diagram by Patrick Edwin Moran via Wikimedia Commons]



