Strategic Tunnel Vision

Mouth of a Tunnel

Change and innovation are topics that have been prominent on this blog over the last year. In fact, Greger Wikstrand and I have traded a total of twenty-six posts (twenty-seven counting this one) on the subject.

Greger’s last post, “Successful digitization requires focus on the entire customer experience – not just a neat app” (it’s in Swedish, but it translates well to English), discussed the critical nature of customer experience to digital innovation. According to Greger, without taking customer experience into account:

You can make the world’s best app without getting more, more satisfied, and more profitable customers. It’s like trying to make a boring game more exciting by spraying gold paint on the playing pieces.

Change and innovation are not the same thing. Change is inevitable; innovation is not (with a h/t to Tom Cagley for that quote). As Greger pointed out in his latest article, to get improved customer experience, you need depth. Sprinkling digital fairy dust over something is not likely to result in innovation. New and different can be really great, but new and different solely for the sake of new and different doesn’t win the prize. Context is critical.

If you’ve read more than a couple of my posts, you’ve probably realized that among my rather varied interests, history is a major one. I lean heavily on military history in particular when discussing innovation. This post won’t break with that tradition.

The blog Defence-in-Depth, operated by the Defence Studies Department, King’s College London, has published two posts this week dealing with the Suez Crisis of 1956, primarily in terms of the Anglo-French forces. One deals with the land operations and the other with naval operations. They struck a chord because they both illustrated how an over-reaction to change can have drastic consequences from the strategic level down to the tactical.

Buying into a fad can be extremely expensive.

The advent of the nuclear age at the end of World War II dramatically transformed military and political thought. The atomic bomb was the ultimate game-changer in that respect. In the time-honored tradition, the response was over-reaction. “Atomic” was the “digital” of the late 40s into the 60s. The US even developed a recoilless gun that could launch a 50-pound nuclear warhead 1.25 to 2.5 miles. “Move fast and break things” was serious business back in the day.

This extreme focus on what had changed, however, led to a rather common problem, tunnel vision. Nuclear capability became such an overarching consideration that other capabilities were neglected. Due to this neglect of more conventional capabilities, the UK’s forces were seriously hampered in their ability to perform their mission effectively. Misguided thinking at the strategic level affected operations all the way down to the lowest tactical formations.

It’s easy to imagine present-day IT scenarios that fall prey to the same issues. A cloud or digital initiative given top priority without regard to maintaining necessary capabilities could easily wind up failing in a costly manner and impairing the existing capability. It’s important to understand that time, money, and attention are finite resources. Adding capability requires increasing the resources available for it, either through adding new resources or freeing up existing ones by reducing the commitment to less important capabilities. If there is no real appreciation of what capabilities exist and what the relative value of each is, making this decision becomes a shot in the dark.

Situational awareness across all levels is required. To be effective, that awareness must integrate changes to the context while not losing sight of what already was. Otherwise, to use a metaphor from my high school football days, you risk acting like a “blind dog in a meat-packing plant”.


The Hidden Cost of Cheap – UX and Internal Applications

Sisyphus by Titian

Why would anyone worry about user experience for anything that’s not customer-facing?

This question was the premise of Maurice Roach’s post in the Zühlke blog, “Empathise with your users or you won’t solve their problems”:

Bring up the subject of user empathy with some engineers or product owners and you’ll probably hear comments that fall into one of the following categories:

  • Why do we need to empathise when the requirements tell us all we need to know about the problem at hand?
  • Is this really going to improve anything?
  • Sounds like an expensive waste of time
  • They’ll have to use whatever they’re given

These aren’t unexpected responses, it’s easy to put empathy into the “touchy feely”, “let’s all hug and get along” box of product management.

Roach’s answers:

Empathy does a number of things, but mainly it increases the likelihood that the delivery team will think of a user and their pain points when delivering a feature.

If an engineer, UX designer or product owner has sat with a user, watched them interact with their current software or device they will have an understanding of their frustrations, concerns and impediments to success. The team will be focused on creating features with the things they have witnessed in mind, they’re thinking about how their software will affect a human being and no amount of requirement documentation will give them that emotional connection.

Empathy can also help to develop a shared trust in the application development process. The users see that the delivery team are interested in helping to solve their problems and the product delivery team see the real users behind the application.

All of these are valid reasons, but the list is incomplete; they answer the question solely from a software development point of view. To his credit, Roach pushes past the purely technical aspects into the world of the user. This expanded exploration of the context is, in my opinion, absolutely essential. What’s presented above is an IT-centric viewpoint that needs to be married with a business-centric viewpoint in order to get a fuller picture.

Nick Shackleton-Jones, in his post “The Future Is… Organisational Usability!”, outlined the problem:

Here’s how your organisation works: you hire people who are increasingly used to a world where they can do pretty much anything via an app on their iPhone, and you subject them to a blizzard of process, policy, antiquated systems and outdated ways of working which pretty much stop them in their tracks, leaving them unproductive and demoralised. Frankly, it’s a miracle they manage to accomplish anything at all.

As he notes, enterprises are putting a lot of effort into digital initiatives aimed at making it easier for customers to engage with them. However:

…if we are going to be successful in future we need to make it much easier for our people to do their jobs: because they are going to be spending less time with us, and because we want engagement and retention, and because if we require high levels of capability (to work our complex systems) then our resourcing costs will go through the roof. We have to simplify ‘getting stuff done’. To put it another way: in an ideal world, any job in your organisation should be do-able by a 12-yr old.

While I disagree that “any job in your organisation should be do-able by a 12-yr old”, Shackleton-Jones’ point is well-taken that it is in the interests of the business to make it easier for people to do their jobs. All aspects of the system, whether organizational, procedural, or technological, should be facilitating, not hindering, the mission. Self-inflicted, unnecessary impediments are morale-killers and degrade both effectiveness and efficiency. All three of these directly impact customer experience.

While this linkage between employee user experience and customer experience makes usability important for line-of-business systems (both technological and social), it has value for peripheral systems as well. Time people spend on ancillary tasks (filling out time sheets, requesting supplies, etc.) is time not spent on their primary duties. You may not be able to eliminate those tasks, but you can minimize their expense by making them quick and easy to complete. The further someone’s knowledge/skill/experience level gets from “do-able by a 12-yr old”, the bigger the savings to be had by paying attention to this.

Rather than asking if you can afford to pay attention to user experience, you might want to ask whether you can afford not to.

Innovation – What’s Old can be New Again

Roman Road Ruins

There’s an old rhyme about what a bride should wear for luck on her wedding day: “Something old, something new, something borrowed, something blue…”. While reading an article on the origins of the US highway system, I thought about this rhyme in relation to the concept of innovation. Part of that article related the US Army’s interest in highways as a means of moving troops, etc.:

When the war ended in 1919, the Army sent an expedition from Camp Meade, Maryland, to San Francisco to study the feasibility of moving troops and equipment by truck, and it was a disaster. At an average speed of seven miles per hour, the trip took two months, and many of the vehicles broke down along the way. The rough journey convinced the officer in charge—a young Dwight D. Eisenhower—that impassable roads were a risk to national security and that building good roads should be the highest priority for the nation.

During World War II, Eisenhower’s views were confirmed by the Allies’ use of the autobahns in the invasion of Germany (ironically, the German military had been much less enthusiastic about their military potential, preferring the older, more established railroad system). Eisenhower would go on to spearhead the construction of the Interstate Highway System when elected President after the war.

While the construction and use of high-quality roads for military purposes was an innovation, it certainly wasn’t a new development. The Romans had “been there, done that” long before. It became innovative again due to the fact that the social and technological context had shifted, making it more effective than the previous transport innovation, railroads.

There’s an old saying that “history does not repeat itself, but it rhymes”. Recognizing when something old has regained its relevance can be a path to innovation. In my post “Innovation on Tap”, I talked about Amazon’s plan to open physical bookstores and embrace the concept of “showrooming”, something that traditional retailers have been struggling with for years. Other companies are jumping aboard. What I find interesting is that this concept was the business model of a company headquartered in my home town. They operated from 1957 to 1997, when they went bankrupt. Now, less than twenty years later, that model is seen as innovative. Once again, it’s the shift in social and technological contexts that has changed it into an effective solution. It’s the value that makes it innovative.

This is part 17 of an ongoing conversation with Greger Wikstrand on the topic of innovation.

[Roman Road image by Phr61 via Wikimedia Commons.]

“Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?” on Iasa Global

In my part of the world, it’s not uncommon for people to say that someone wouldn’t recognize something if it “bit them in the [rude rump reference]”. For many organizations, that seems to be the explanation for the state of their enterprise IT architecture. For while we might claim to understand terms like “design”, “encapsulation”, and “separation of concerns”, the facts on the ground fail to show it. Just as software systems can degenerate into a “Big Ball of Mud”, so too can the systems of systems comprising our enterprise IT architectures. If we look at organizations as the systems they are, it should be obvious that this same entropy can infect the organization as well.

See the full post on the Iasa Global site (a re-post, originally published here).

“Microservice Mistakes – Complexity as a Service” on Iasa Global

I’m pleased to announce that I’ve been asked to continue as a contributor to the Iasa Global site. I’m planning to post original content there on at least a monthly basis. In the interim, please enjoy a re-post of “Microservice Mistakes – Complexity as a Service”.

Who Needs Architects? – Are You Committed to Reaching Your Goals?

Two wedding rings

A good way to avoid writer’s block is to have a smart and engaged readership. They tend to ask questions that keep you on your toes. Thomas Cagley did the honors on my last post dealing with separation of concerns and the need for architectural design up to the level of enterprise IT architecture:

Is there an approach you would suggest to ensure this type of thinking occurs when it really matters rather than a reaction to problems?

I responded with:

The short answer is commitment, which is best demonstrated by having goals (both in terms of process and in terms of application, solution, and enterprise IT architecture) and having people responsible for making sure those goals are accounted for in day to day operations.

The long answer will be this week’s post. 🙂

“Accidental architecture”, the organic, undirected evolution of architecture, describes the state of many organizations today. This state of affairs exists across the entire range of application, solution, and enterprise IT architectures. Without coherent goals and a plan of how the parts will work together toward these common goals, how could it be otherwise? As Tom Graves noted in “Governance is not an end in itself”:

…things work better when they work together, on purpose.

Purposeful governance is needed to direct efforts towards the same goals. Not because people need to be directed to perform, but because people need to know what is valued to direct their own performance in a way that benefits the organization as a whole. Purposeful governance is the opposite of governance for the sake of being able to say we have governance (Tom again):

…governance should never be ‘an end in itself’. Instead, governance exists solely to support a business need – or, more specifically, to keep things on track towards that business-need.

Anyone can get caught by surprise, even under the best of circumstances, but failing to plan practically ensures that problems will arise. Returning to a policy of neglect as soon as a brushfire is put out effectively ensures that another fire (and likely one from the same source) will break out. Effective governance (IT and otherwise) requires commitment: commitment to determine a direction towards organizational goals, commitment to follow through with action to achieve those goals, and commitment to monitor that the goals remain the same and that the direction continues to point towards them. This requires full-time commitment to and ownership of those considerations.

Would you ride with a driver that wouldn’t commit to paying attention to the road unless there was a problem and only until the problem was “solved”?

Effective governance requires commitment to collaboration. Just as those responsible for the solution and application architectures need direction to mesh with the IT architecture of the enterprise, those responsible for the solution and enterprise IT architectures need the feedback of those responsible for the application architectures. Failure to listen can lead to catastrophe, as can failure to achieve engagement.

As I noted above, many organizations have an “accidental architecture”. There are many reasons why they find themselves there. The more important question is: are they willing to remain there, or will they make the commitment to take control of their future?

Microservices – Sharpening the Focus

Motion Blurred London Bus

While it was not the genesis of the architectural style known as microservices, the March 2014 post by James Lewis and Martin Fowler certainly put it on the software development community’s radar. Although the level of interest generated has been considerable, the article was far from an unqualified endorsement:

Despite these positive experiences, however, we aren’t arguing that we are certain that microservices are the future direction for software architectures. While our experiences so far are positive compared to monolithic applications, we’re conscious of the fact that not enough time has passed for us to make a full judgement.

One reasonable argument we’ve heard is that you shouldn’t start with a microservices architecture. Instead begin with a monolith, keep it modular, and split it into microservices once the monolith becomes a problem. (Although this advice isn’t ideal, since a good in-process interface is usually not a good service interface.)

So we write this with cautious optimism. So far, we’ve seen enough about the microservice style to feel that it can be a worthwhile road to tread. We can’t say for sure where we’ll end up, but one of the challenges of software development is that you can only make decisions based on the imperfect information that you currently have to hand.

In the course of roughly fourteen months, Fowler’s opinion has gelled around the “reasonable argument”:

So my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.

This mirrors what Sam Newman stated in “Microservices For Greenfield?”:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

In short, the application architectural style known as microservice architecture (MSA) is unlikely to be an appropriate choice for the early stages of an application. Rather, it is a style most likely to be migrated to from a more monolithic beginning. Some subset of applications may benefit from that form of distributed componentization at some point, but distribution, at any degree of granularity, should be based on need. Separation of concerns and modularity do not imply a need for distribution. In fact, poorly planned distribution may actually increase complexity and coupling while destroying encapsulation. Dependencies must be managed whether local or remote.

This is probably a good point to note that there is a great deal of room between a purely monolithic approach and a full-blown MSA. Rather than a binary choice, there is a wide range of options between the two. The fractal nature of the environment we inhabit means that responsibilities can be described as singular and separate without being required to share the same granularity. Monoliths can be carved up, and the resulting component parts can still be considered monolithic compared to an extremely fine-grained, sub-application microservice; that’s okay. The granularity of the partitioning (and the associated complexity) can be tailored to the desired outcome (such as making components reusable across multiple applications or more easily replaceable).

The moral of the story, at least in my opinion, is that intentional design concentrating on separation of concerns, loose coupling, and high cohesion is beneficial from the very start. Vertical (functional) slices, perhaps combined with layers (what I call “dicing”), can be used to achieve these ends. Regardless of whether the components are to be distributed at first, designing them with that in mind from the start will ease any transition that comes in the future without ill effects for the present. Neglecting these issues risks hampering, if not outright preventing, breaking components out at a later date without resorting to a re-write.
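
As a rough sketch of what that kind of slicing can look like in code (the names below are hypothetical, invented purely for illustration, not drawn from any particular codebase), a vertical slice can expose its capability through a narrow contract. Callers depend only on the contract, so whether the implementation stays in-process or eventually sits behind a remote call is a deployment decision rather than a redesign:

```java
import java.util.List;

// Hypothetical "orders" slice (each type would normally be a public type in its
// own file within an orders package); callers see only this narrow contract.
interface OrderService {
    OrderConfirmation placeOrder(OrderRequest request);
}

// Simple value types keep the contract serialization-friendly, which eases a
// later move to a remote boundary if one is ever warranted.
record OrderRequest(String customerId, List<OrderLine> lines) { }
record OrderLine(String sku, int quantity) { }
record OrderConfirmation(String orderId) { }

// Persistence stays an internal detail of the slice.
interface OrderRepository {
    String save(OrderRequest request);
}

// Today: an in-process implementation, wired up at composition time.
final class LocalOrderService implements OrderService {
    private final OrderRepository repository;

    LocalOrderService(OrderRepository repository) {
        this.repository = repository;
    }

    @Override
    public OrderConfirmation placeOrder(OrderRequest request) {
        return new OrderConfirmation(repository.save(request));
    }
}

// Tomorrow, if the need arises: a RemoteOrderService implementing the same
// contract could be substituted without touching any of the callers.
```

The specific shapes of the types matter far less than the fact that the boundary is explicit from day one; distribution stays an option instead of becoming a rewrite.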

These same concerns apply at higher levels of abstraction as well. Rather than blindly growing a monolith that is all things to all people, adding new features should be treated as an opportunity to evaluate whether that functionality coheres with the existing application or is better suited to being a service from an external provider. Just as the application architecture should aim for modularity, so too should the solution architecture.

A modular design is a flexible design. While we cannot know up front the extent of change an application will undergo over its lifetime, we can be sure that there will be change. Designing with flexibility in mind means that change, when it comes, is less likely to be an existential crisis. As Hayim Makabee noted in his write-up of Rotem Hermon’s talk, “Change Driven Design”: “Change should entail extending the system rather than refactoring.”
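
To make that idea a bit more concrete, here is a minimal, hypothetical sketch (not taken from the talk or the write-up): when a likely point of variation is put behind an interface, a new requirement becomes a new implementation rather than surgery on existing code:

```java
import java.util.List;

// Hypothetical example: pricing rules anticipated as a point of variation.
interface DiscountRule {
    double discountFor(Order order);
}

record Order(String customerId, double total, boolean loyaltyMember) { }

final class LoyaltyDiscount implements DiscountRule {
    @Override
    public double discountFor(Order order) {
        return order.loyaltyMember() ? order.total() * 0.05 : 0.0;
    }
}

// A new business rule arrives: add a class and register it; nothing else changes.
final class BulkOrderDiscount implements DiscountRule {
    @Override
    public double discountFor(Order order) {
        return order.total() > 1_000.0 ? order.total() * 0.02 : 0.0;
    }
}

final class PricingService {
    private final List<DiscountRule> rules;

    PricingService(List<DiscountRule> rules) {
        this.rules = rules;
    }

    double priceFor(Order order) {
        double discount = rules.stream().mapToDouble(r -> r.discountFor(order)).sum();
        return order.total() - discount;
    }
}
```

A new rule slots in alongside the old ones; the classes already in production are never reopened.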

A full-blown MSA is one possible outcome for an application. It is, however, not the most likely outcome for most applications. What is important is to avoid unnecessary constraints and retain sufficient flexibility to deal with the needs that arise.

[London Bus Image by E01 via Wikimedia Commons.]

Form Follows Function on SPaMCast 331

SPaMCAST logo

I’m back for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

SPaMCast 331 features Tom on Agile Coaching, a discussion of my “Microservices vs SOA – Is there any real difference?” post and an installment of Jo Ann Sweeny’s column, “Explaining Communication”, dealing with communication channels.

Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?

Surveying and drafting instruments and examples

In my part of the world, it’s not uncommon for people to say that someone wouldn’t recognize something if it “bit them in the [rude rump reference]”. For many organizations, that seems to be the explanation for the state of their enterprise IT architecture. For while we might claim to understand terms like “design”, “encapsulation”, and “separation of concerns”, the facts on the ground fail to show it. Just as software systems can degenerate into a “Big Ball of Mud”, so too can the systems of systems comprising our enterprise IT architectures. If we look at organizations as the systems they are, it should be obvious that this same entropy can infect the organization as well.

One symptom of this entropy is when the dividing lines blur, weakening or even removing constraints. While the term “constraint” commonly has a negative connotation, constraints provide the structure and definition of a system. Separation of concerns, encapsulation, and DRY are all constraints that are intended to provide benefit. We accept limits on the concerns addressed, on the accessibility of internals, and on the number of instances of code or data in order to reduce complexity, not just to check off a philosophical box. If we remove, or even just relax, these types of constraints too much, we incur risk.
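
A deliberately small, hypothetical illustration of the point (nothing here is specific to any particular system): the value of a constraint like encapsulation lies in what it prevents consumers from depending on:

```java
import java.util.ArrayList;
import java.util.List;

// Relaxed constraint: internals are exposed, so every caller can (and eventually
// will) depend on the fact that entries are a mutable ArrayList.
class LeakyAuditLog {
    public final ArrayList<String> entries = new ArrayList<>();
}

// Constraint kept: callers can only append and read, so the storage strategy,
// validation, and thread-safety can all change without breaking anyone.
final class AuditLog {
    private final List<String> entries = new ArrayList<>();

    void record(String entry) {
        if (entry == null || entry.isBlank()) {
            throw new IllegalArgumentException("audit entries must not be empty");
        }
        entries.add(entry);
    }

    List<String> entries() {
        return List.copyOf(entries); // defensive copy keeps internals internal
    }
}
```

The same reasoning scales up: the less of a system’s (or an enterprise’s) internals its neighbors can reach into, the cheaper those internals are to change.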

This blurring of lines can occur at any level and on multiple levels. Additionally, architectural weakness at a higher level of abstraction can negate strengths at lower levels. A collection of well-designed systems will not ensure a coherent enterprise IT architecture if there is overlap and redundancy without a clear understanding of which ones are authoritative. Accidental architecture is no more likely to work at higher levels of abstraction than lower ones.

Architectural design, at each level of granularity, should be intentional and appropriate to that level. The ideal is not to over-regulate, but to strike a balance. Micromanaging internals wastes effort better spent on something beneficial; abdicating design responsibility practically guarantees chaos. An additional consideration is the fit between the human and technological aspects. Conway’s law is more than just an observation; it can be used as a tool to align applications to a specific business concern, as well as to align development teams to specific applications/application components.

Just as a carver takes note of the grain of the wood being shaped, so should an architect work with rather than against the grain of the organization.

Jessica Kerr’s post, “Microservices, Microbusinesses”, captures these concepts from the viewpoint of microservice architectures. Partitioning application concerns into microservices allows for internal flexibility at the cost of external conformance to necessary governance. As Kerr puts it, “…everybody is a responsible adult”:

That’s a lot of overhead and glue code. Every service has to do translation from input to its internal format, and then to whatever output format someone else requires. Error handling, caching or throttling, failover and load balancing and monitoring, contract testing, maintaining multiple interface versions, database interaction details. Most of the code is glue, layers of glue protecting a small core of business logic. These strong boundaries allow healthy relationships with other services, including new interactions that weren’t designed into the architecture from the beginning. For all this work, we get freedom on the inside.
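
To put a concrete, purely hypothetical shape on the kind of glue Kerr is describing, a service typically translates the external wire contract into its own internal model at the boundary, and back again on the way out, so that other services’ vocabularies never leak into its core:

```java
// Hypothetical boundary glue for a "customers" service: translate the external
// (wire) contract into the service's internal model and back, so the wire
// format never leaks into the core.
record CustomerDto(String id, String displayName, String status) { }   // external contract
record Customer(CustomerId id, String name, CustomerStatus status) { } // internal model
record CustomerId(String value) { }
enum CustomerStatus { ACTIVE, SUSPENDED, CLOSED }

final class CustomerTranslator {
    Customer toInternal(CustomerDto dto) {
        return new Customer(new CustomerId(dto.id()), dto.displayName(), statusFor(dto.status()));
    }

    CustomerDto toExternal(Customer customer) {
        return new CustomerDto(customer.id().value(), customer.name(),
                customer.status().name().toLowerCase());
    }

    private CustomerStatus statusFor(String raw) {
        // tolerate the partner's vocabulary at the edge, not in the core;
        // unknown values are treated as closed purely for this sketch
        return switch (raw == null ? "" : raw.toLowerCase()) {
            case "active", "enabled" -> CustomerStatus.ACTIVE;
            case "suspended", "on-hold" -> CustomerStatus.SUSPENDED;
            default -> CustomerStatus.CLOSED;
        };
    }
}
```

None of that is business logic, which is exactly the point: the glue is the price of strong boundaries, and the freedom on the inside is what it buys.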

Kerr also recognizes the applicability of this trade-off to the architecture of the organization:

Still, a team exists as a citizen inside a larger organization. There are interfaces to fulfill. Management really does need to know about progress. Outward collaboration is essential. We can do this the same way our code does: with glue. Glue made of people. One team member, taking the responsibility of translating cards on the wall into JIRA, can free the team to optimize communication while filling management’s strongest needs.

Management defines an API. Encapsulate the inner workings of the team, and expose an interface that makes sense outside. By all means, provide a reference implementation: “Other teams use Target Process to track work.” Have default recommendations: “We use Clojure here unless there’s some better solution. SQL Server is our first choice database for these reasons.” Give teams a strong process and technology stack to start from, and let them innovate from there.

“Good fences make good neighbors” not by keeping out, but by channeling traffic into commonly understood and commonly accepted directions. We recognize lines so as to influence those aspects we truly need to influence. More importantly, we recognize lines to prevent needless conflict and waste. The key is to draw the lines so that they work for us, not against us.

Form Follows Function on SPaMCast 327

SPaMCAST logo

It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

SPaMCast 327 features Tom on standups, a discussion of my “Who Needs Architects? – Navigating the Fractals” post and an installment of Jo Ann Sweeny’s column, “Explaining Communication”.

Cheers!