Form Follows Function on SPaMCast 459

SPaMCAST logo

I’m back for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 459, features Tom’s essay on resistance to change. This is followed by our Form Follows Function segment discussing my post “Innovation, Intention, Planning and Execution”. Jeremy Berriault‘s QA corner finishes the cast with a segment on testing packaged software.

In this installment, Tom and I talk about effectiveness, particularly the relationship between effectiveness and reasoned, intentional action. In short, organizations are (social) systems, and “things work better when they work together on purpose”. You can’t create serendipity, but if you want to be able to exploit what serendipity drops in your lap, you need to prepare the ground ahead of time.

You can find all the SPaMCast episodes I’m in under the SPaMCast Appearances category on this blog. Enjoy!

Form Follows Function on SPaMCast 450

SPaMCAST logo

It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 450, features Tom’s excellent essay on roadmaps and a Form Follows Function installment based on my post “Holistic Architecture – Keeping the Gears Turning”.

Our conversation in this episode continues with the organizations as systems concept, this time from the standpoint of how the social system impacts (often negatively) the software systems it relies on. Specifically, we talk about how an organization that fails to manage itself as a system can end up with an architecture, of both the enterprise and its IT, that resembles “spare parts flying in formation”. It’s not a good situation, no matter how well made those spare parts are!

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Holistic Architecture – Keeping the Gears Turning

Gears Turning Animation

In last week’s post, “Trash or Treasure – What’s Your Legacy?”, I talked about how to define “legacy systems”. Essentially, the greater the divergence between the needs of the social systems and the fitness for purpose of the software systems that enable them, the more likely it is that those software systems can be considered “legacy”. The post attracted a few comments.

I love comments.

It’s nearly impossible to have writer’s block when you’ve got smart people commenting on your work and giving you more to think about. I got just that courtesy of theslowdiyer. The comment captured a critical point:

Agree that ALM is important, and actually also for a different reason – a financial one:

First of all, the cost of operating the system through the full Application Life Cycle (up to and including decommissioning) needs to be incorporated in the investment calculation. Some organisations will invariably get this wrong – by accident or by (poor) design (of processes).

But secondly (and this is where I have seen things go really wrong): If you invest a capability in the form of a new system then once that system is no longer viable to maintain, you probably still need the capability. Which means that if you are adding new capabilities to your system landscape, some form of accruals to sustain the capability ad infinitum will probably be required.

The most important thing is the capability, not the software system.

The capability is an organizational/enterprise concern. It belongs to the social systems that comprise the organization and the over-arching enterprise. This is not to say that software systems are not important – lack of automation or systems that have slipped into the legacy category can certainly impede the enterprise. However, without the enterprise, there is no purpose for the software system. Accordingly, we need to keep our focus centered on the key concern, the capability. So long as the capability is important to the enterprise, then all the components, both social and technical, need to be working in harmony. In short, there’s a need for cohesion.

Last fall, Grady Booch tweeted:

Ruth Malan replied with a great illustration of it from her “Design Visualization: Smoke and Mirrors” slide deck:

Obviously, no one would want to fly on a plane in that state (which illustrates the enterprise IT architecture of too many organizations). The more important thing, however, is that even if the plane (the technical architecture of the enterprise) is perfectly cohesive, if the social system maintaining and operating it is similarly fractured, it’s still unsafe. If I thought that pilots, mechanics, and air traffic controllers were all operating at cross purposes (or at least without any thought of common cause), I’d become a fan of travel by train.

Unfortunately, for too many organizations, accidental architecture is the most charitable way to describe the enterprise. Both social and technical systems have been built up on an ad hoc basis and allowed to evolve without reference to any unifying plan. Technical systems tend to be built (and worse, maintained) according to a project-oriented mindset (aka “done and run”), leading to an expensive cycle of decay and fix. The social systems can become self-perpetuating fiefs. The level of cohesion between the two, to the extent that it ever existed, breaks down even further.

A post from Matt Balantine, “Garbage In”, illustrates the cohesion issue across both social and technical systems. He describes an attempt to analyze spending data across a large organization composed of federated subsidiaries:

The theory was that if we could find the classifications that existed across each of the organisations, we could then map them, Rosetta Stone-like, to a standard schema. As we spoke to each of the organisations we started to realise that there may be a problem.

The classification systems that were in use weren’t being managed to achieve integrity of data, but instead to deliver short-term operational needs. In most cases the classification was a drop-down list in the Finance system. It hadn’t been modelled – it just evolved over time, with new codes being added as necessary (and old ones not being removed because of previous use). Moreover, the classifications weren’t consistent. In the same field information would be encapsulated in various ways.

Even in more homogeneous organizations, I would expect to find something similar. It’s extremely common for aspects of one capability to bear on others. What is the primary concern for one business unit may be one of many subsidiary concerns for another (see “Making and Taming Monoliths” for an illustrated example). Because of the disconnected way capabilities (and their supporting systems) are traditionally developed, however, there tends to be a lot of redundant data. This isn’t necessarily a bad thing (e.g. a cache is redundant data maintained for performance purposes). What is a bad thing is when the disconnects cause disagreements and no governance exists to mediate the disputes. Not having an authoritative source is arguably worse than having no data at all since you don’t know what to trust.
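To make the redundant-data problem concrete, here’s a minimal, purely illustrative sketch (the system names, customer IDs, and segment codes are all invented) of the situation Balantine describes: the same attribute held in several systems, with no authoritative source to settle disagreements, so all a reconciliation job can do is surface the conflicts for governance to adjudicate:

```python
from collections import defaultdict

# Hypothetical snapshots of the same "customer segment" attribute as held by
# three independently evolved systems. None has been designated authoritative.
finance = {"C001": "SMB", "C002": "ENTERPRISE", "C003": "SMB"}
crm     = {"C001": "Small Business", "C002": "ENTERPRISE", "C003": "MID"}
billing = {"C001": "SMB", "C002": "ENT", "C003": "SMB"}

sources = {"finance": finance, "crm": crm, "billing": billing}

def find_conflicts(sources):
    """Group each customer's values by source and report any disagreement."""
    by_customer = defaultdict(dict)
    for source_name, data in sources.items():
        for customer_id, value in data.items():
            by_customer[customer_id][source_name] = value
    return {cid: vals for cid, vals in by_customer.items()
            if len(set(vals.values())) > 1}

for customer_id, values in find_conflicts(sources).items():
    # Without an agreed authoritative source, "which value is right?" is a
    # governance question, not one the code can answer on its own.
    print(f"{customer_id}: {values}")
```

Even this toy version shows the trap: the conflicts are easy to detect, but without governance there is no basis for resolving them.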

Having an idea of what pieces exist, how they fit together, and how they will evolve while remaining aligned is, in my opinion, critical for any system. When it’s a complex socio-technical system, this awareness needs to span the whole enterprise stack (social and technical). Time and effort spent maintaining coherence across the enterprise, rather than detracting from the primary concerns, will actually enhance them.

Are you confident that the plane will stay in the air or just hoping that the wing doesn’t fall off?

Form Follows Function on SPaMCast 446

SPaMCAST logo

It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 446, features Tom’s essay on questions, a powerful tool for coaches and facilitators. A Form Follows Function installment based on my post “Go-to People Considered Harmful” comes next and Kim Pries rounds out the podcast with a Software Sensei column on servant leadership.

Our conversation in this episode continues with the organizations as systems concept and how concentrating institutional knowledge in go-to people creates a dependency management nightmare. Social systems run on relationships, and when we allow knowledge and skill bottlenecks to form, we set our organization up for failure. Specialists with deep knowledge are great, but if they don’t spread that knowledge around, we risk avoidable disasters when they’re unavailable. Redundancy aids resilience.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Form Follows Function on SPaMCast 442

SPaMCAST logo

A new month brings a new appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 442, features Tom’s excellent essay on capability teams (highly recommended!), followed by a Form Follows Function installment based on my post “Systems of Social Systems and the Software Systems They Create”. Kim Pries bats cleanup with a Software Sensei column, “Software Quality and the Art of Skateboard Maintenance”.

In this episode, Tom and I continue our discussion on the organizations as systems concept and how systems must fit into their context and ecosystem. In my previous posts on the subject, I took more of a top-down approach. With this post, I flipped things around to a bottom-up view. Understanding how the social and software systems interact (including the social system involved in creating/maintaining the software system) is critical to avoid throwing sand in the gears.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Stopping Accidental Technical Debt

Buster Keaton looking at a poorly constructed house

In one of my earlier posts about technical debt, I differentiated between intentional debt (that taken on deliberately and purposefully) and accidental debt (that which just accrues over time without rhyme or reason or record). Dealing with technical debt (in the sense of evaluating, tracking, and resolving it) is obviously a consideration for someone in an application architect role. While someone in that role absolutely should be aware of the intentional debt, is there a way to be more attuned to the accidental debt as well?

Last summer, I published a post titled “Distance…is the one true enemy…”. The post started with a group of tweets from Gregory Brown talking about the corrosive effects of distance on software development (distance between compile and run, between failure and correction, between development and feedback, etc.). I then extended the concept to management, talking about how distance between sense-maker and decision-maker could negatively affect the quality of the decisions being made.

There’s also a distance that neither Greg nor I covered at the time: design distance, the distance between the design and the outcome. Reducing design distance makes it easier to keep a handle on the accidental debt as well as the intentional.

Distance between the architectural decisions and the implementation can introduce technical debt. This distance can come from remote decision-makers, architecture pigeons who swoop in, deposit their “wisdom”, and then fly away home. It can come from failing to communicate the design considerations effectively across the entire team. It can also come from failing to monitor the system as it evolves. The design and the implementation need to be in alignment. Even more so, the design and the implementation need to align with particular problems to be solved/jobs to be done. Otherwise, the result may look like this:

Distance between development of the system and keeping the system running can introduce technical debt as well. The platform a system runs on is a vital part of the system, as critical as the code it supports. As with the code, the design, implementation, and context all need to be kept in alignment.
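One lightweight way to shrink that distance is to make part of the assessment continuous, turning some of the design intent into an automated check that runs as the system evolves. The sketch below is only illustrative: the layer names and source layout are hypothetical, and dedicated tools (ArchUnit for Java, import-linter for Python, and the like) cover this ground more thoroughly. It simply shows the idea of failing the build when the implementation drifts from the intended structure:

```python
import ast
from pathlib import Path

# Hypothetical layering rules: each layer may only import from the layers it
# lists here. Adjust the names to match the actual design.
ALLOWED = {
    "ui": {"services"},
    "services": {"domain"},
    "domain": set(),  # the core should depend on nothing above it
}

def layer_of(relative_path: Path):
    """Map a source file to its layer via its top-level package name."""
    top = relative_path.parts[0] if relative_path.parts else None
    return top if top in ALLOWED else None

def violations(root: Path):
    """Yield (file, imported module) pairs that break the layering rules."""
    for py_file in root.rglob("*.py"):
        layer = layer_of(py_file.relative_to(root))
        if layer is None:
            continue
        tree = ast.parse(py_file.read_text(), filename=str(py_file))
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                imported = [node.module]
            elif isinstance(node, ast.Import):
                imported = [alias.name for alias in node.names]
            else:
                continue
            for name in imported:
                target = name.split(".")[0]
                if target in ALLOWED and target != layer and target not in ALLOWED[layer]:
                    yield py_file, name

if __name__ == "__main__":
    found = list(violations(Path("src")))
    for py_file, name in found:
        print(f"Layering violation: {py_file} imports {name}")
    raise SystemExit(1 if found else 0)  # non-zero exit fails the build on drift
```

The specifics matter less than the principle: whatever expresses the design (layering rules, allowed dependencies, naming conventions) should be checked against the implementation on every change, so the gap gets noticed while it is still cheap to close.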

Alignment of design, implementation, and context can only be maintained by ongoing architectural assessment. Stefan Dreverman’s “Using Philosophy in IT architecture” identified four questions to be asked as part of an assessment:

  1. “What is my purpose?”
  2. “What am I composed of?”
  3. “What’s in my environment?”
  4. “What do I communicate?”

These questions are applicable not only to the beginning of a system, but throughout its life-cycle. Failing to re-evaluate the architecture as a whole while the system evolves can lead to inconsistencies as design distance grows. We can get so busy dealing with the present that we create a future of pain:

https://twitter.com/jetpack/status/854661368611471360

At first glance, this approach might seem to be expensive, but rewriting legacy systems is expensive as well (assuming the rewrite would be successful, which is a tenuous assumption). Building applications with a one-and-done mindset is effectively building a legacy system.

Form Follows Function on SPaMCast 438

SPaMCAST logo

Once again, I’m making an appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 438, features Tom’s essay on using sizing for software testing, Kim Pries with a Software Sensei column (canned solutions), and a Form Follows Function installment based on my post “Organizations as Systems and Innovation”.

In this episode, Tom and I discuss how systems must fit into their context and ecosystem; otherwise, it can be like dropping a high-performance sports car engine into a VW Beetle. Disney-physics may work in the movies, but it’s unlikely to be successful in the real world. If all the parts don’t fit together, friction ensues.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

You can’t always get what you want…

Lucifer

You can’t always get what you want
But if you try sometimes well you just might find
You get what you need

When it comes to systems, you can’t always get what you want, but you do get what you design (intentionally or not), whether it’s what you need or not.

In other words, the architecture of systems, both social and software, evolves through some combination of intentional design and accidental emergence. Regardless of which end of the continuum the system leans toward, the end result will reflect the decisions made (or not made) in relation to various stimuli. Regarding businesses (social systems), Ruth Malan, in her February 2012 “Trace in Sand” post, put it this way:

I have been talking about agility in terms of evolutionary ecology, but with the explicit recognition that companies, comprised after all of individuals, attempt to speed and alter and intervene and interject and intercede and (I’m looking for the right word here) shape evolutionary processes with intentional actions — concerted, but also emergent from more and less choreographed, actions and intentions. Being bumped along by the unpredictable interactions of others, some from within, but also from “invasive species” from other ecosystems looking for new applications for their adaptable, mutable capability set.

Organizations create and participate in business ecologies. They build up the relationships that stabilize parts of the broader ecosystem, and create conditions for organizational forms to thrive there. They create products, they create the seeds of the next generation of harvest. They produce variants on their family tree, to target and develop niches.

Ruth further notes that while businesses adapt and improve in some cases, in other cases they have “…become too closely adapted to and integrated within an ecosystem that has been replaced or significantly restructured by some landscape reshaping change…”. People generally refer to this phenomenon as disruption, and the way they refer to it would seem to imply that it’s something that happens to or is done to a company. The role of the organization in its own difficulties (or demise) isn’t, in my opinion, well understood.

Last Friday, I saw a tweet from Noah Sussman that provides a useful heuristic for predicting the behavior of any large social system:

the actual reason behind the behavior:

It’s not that people are actively working to harm the organization, but that where there is no leadership, where there is no design, where there is no learning, the system ossifies and breaks down. Being perfectly adapted to an ecosystem that no longer exists is indistinguishable from being poorly adapted to the present context. I’m reminded of what Tom Graves stated in “The game of enterprise-architecture”: “things work better when they work together, on purpose”.

Without direction, entropy emerges where coherence is needed.

This is not to say, however, that micro-management is the answer. Too much design/control is as toxic as too little. This is particularly the case when the management system isn’t intentionally designed. Management that is both ad hoc and rigid can cause new problems while trying to solve existing ones. This is illustrated by another tweet from Friday:

The desire to avoid the “…embarrassment of cancellation” led to the decision to risk the lives of a planeload of “…foreign TV and radio journalists and also other foreign notables…”. I suspect that the fiery deaths of those individuals would have been an even bigger embarrassment. The system, however, is what put the decision-making power in the hands of the person willing to take that gamble.

The system works the way you built it, even when you didn’t intend to build it to work that way.

Emergence: Babies and Bathwater, Plans and Planning

blueprints

“Emergent” is a word that I run into from time to time. When I do run into it, I’m reminded of an exchange from the movie Gallipoli:

Archy Hamilton: I’ll see you when I see you.
Frank Dunne: Yeah. Not if I see you first.

The reason for my ambivalent relationship with the word is that it’s frequently used in a sense that doesn’t actually fit its definition. Dictionary.com defines it like this:

adjective

1. coming into view or notice; issuing.
2. emerging; rising from a liquid or other surrounding medium.
3. coming into existence, especially with political independence: the emergent nations of Africa.
4. arising casually or unexpectedly.
5. calling for immediate action; urgent.
6. Evolution. displaying emergence.

noun

7. Ecology. an aquatic plant having its stem, leaves, etc., extending above the surface of the water.

Most of the adjective definitions apply to planning and design (which I consider to be a specialized form of planning). Number 3 is somewhat tenuous for that sense and 5 only applies sometimes, but 6 is dead on.

My problem, however, starts when it’s used as a euphemism for a directionless approach. The idea that a cohesive, coherent result will “emerge” from responding tactically (whether in software development or in managing a business) is, in my opinion, a dangerous one. I’ve never heard an explanation of how strategic success emerges from uncoordinated tactical excellence that doesn’t eventually come down to faith. It’s why I started tagging posts on the subject “Intentional vs Accidental Architecture”. Success that arises from lack of coordination is accidental rather than by design (not to mention ironic when the lack of intentional coordination or planning/design is itself intentional):

If you don’t know where you are going, any road will get you there.

The problem, of course, is whether you want to be at the “there” you wind up at. There’s also the issue of cost associated with a meandering path when a more direct route was available.

None of this, however, should be taken as a rejection of emergence. In fact, a dogmatic attachment to a plan in the face of emergent facts is as problematic as pursuing an accidental approach. Placing your faith in a plan that has been invalidated by circumstances is as blinkered an approach as refusing to plan at all. Neither extreme makes much sense.

We lack the ability to foresee everything that can occur, but that limit does not mean that we should ignore what we can foresee. A purely tactical focus can lead us down obvious blind alleys that will be more costly to back out of in the long run. Experience is an excellent teacher, but the tuition is expensive. In other words, learning from our mistakes is good, but learning from others’ mistakes is better.

Darwinian evolution can lead to some amazing things, provided you can spare millions of years and lots of failed attempts. An intentional approach allows you to tip the scales in your favor.


Many thanks to Andrew Campbell and Adrian Campbell for the multi-day Twitter conversation that spawned this post. Normally, I unplug from almost all social media on the weekends, but I enjoyed the discussion so much I bent the rules. Cheers, gentlemen!

Go-to People Considered Harmful

Neck of Codd bottle

Okay, so the title’s a little derivative, but it’s both accurate and in keeping with the “organizations as systems” theme of recent posts. Just as dependency management is important for software systems, it’s critical for social systems as well. Failures anywhere along the chain of execution can potentially bring the whole system to a halt if resilience isn’t considered in the design (and evolution) of the system.

Dependency issues in social systems can take a variety of forms. One that comes easily to mind is what is referred to as the “bus factor” – how badly the team is affected if a person is lost (e.g. hit by a bus). Roy Osherove’s post from today, “A Critical Chain of Bus Factors”, expands on this. Interlocking chains of dependencies can multiply the bus factor:

A chain of bus factors happens when you have bus factors depending on bus factors:

Your one developer who knows how to configure the pipeline can’t test the changes because the agent is down. The one guy in IT who has access to the agent needs to reboot it, but does not have access. The one person who has access to reboot it (in the Infra team) is sick, so now there are three people waiting, and there is nothing in this earth that can help that situation.
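A crude, back-of-the-envelope sketch (the roles are drawn from Osherove’s example, but the availability figures are invented purely for illustration) shows why such chains hurt so much: even if each single point of failure is individually available most of the time, the odds that the whole chain is unblocked shrink multiplicatively:

```python
from math import prod

# Purely illustrative numbers: the probability that each single point of
# failure in the chain is available on a given day.
availability = {
    "pipeline expert": 0.9,
    "agent admin": 0.9,
    "infra engineer": 0.9,
}

# The work proceeds only if every link in the chain is available at once.
chain_ok = prod(availability.values())
print(f"Chance the chain is unblocked: {chain_ok:.0%}")      # roughly 73%
print(f"Chance of waiting on somebody: {1 - chain_ok:.0%}")  # roughly 27%
```

Three individually reliable people, chained together as sole gatekeepers, still leave the work blocked more than a quarter of the time, and every additional link only makes it worse.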

The “bus factor”, either individually or as a cascading chain, is only part of the problem, however. A column on CIO.com, “The hazards of go-to people”, identifies the potential negative impacts on the go-to person:

They may:

  • Resent that they shoulder so much of the burden for the entire group.
  • Feel underpaid.
  • Burn out from the stress of being on the never-ending-crisis treadmill.
  • Feel trapped and unable to progress in their careers since they are so important in the role that they are in.
  • Become arrogant and condescending to their peers, drunk with the glory of being important.

The same column also lists potential problems for those who are not the go-to person:

When they realize that they are not one of the go-to people they might:

  • Feel underappreciated and untrusted.
  • Lose the desire to work hard since they don’t feel that their work will be recognized or rewarded.
  • Miss out on the opportunities to work on exciting or important things, since they are not considered dedicated and capable.

A particularly nasty effect of relying on go-to people is that it’s self-reinforcing if not recognized and actively worked against. People get used to relying on the specialist (which is, admittedly, very effective right up until the bus arrives) and neglect learning to do for themselves. Osherove suggests several methods to mitigate these problems: pairing, teaching, rotating positions, etc. The key idea is to spread the knowledge around.

Having individuals with deep knowledge can be a good thing if they’re a reservoir supplying others and not a pipeline constraining the flow. Intentional management of dependencies is just as important in social systems as in software systems.