Pragmatic Application Architecture

I saw a tweet on Friday about a SlideShare deck that looked interesting, so I bookmarked it to read later. As I was reading it this morning, I found myself agreeing with the points being made. When I got to the next to the last slide, I found myself (or at least, this blog) listed alongside some very distinguished company under “Reading Material”.

Thanks, Bart Blommaerts, and nice job!


Organizations and Innovation – Swim or Die!

[Image: shark mural]

One of the few downsides to being a Great White shark is that it must continually move, even while sleeping, in order to keep water moving over its gills. If it stays still too long, it dies. Likewise, organizations must remain in motion, changing and adapting to their ecosystem, or risk dying out as well.

Greger Wikstrand and I have been carrying on a discussion about architecture, innovation, and organizations as systems. Here’s the background so far:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors and avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure alone is insufficient to promote innovation: organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time when innovation yielded uneven results, because effective innovation requires technology, an understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued the theme, covering the social and technological aspects of innovation.

The overriding theme of the conversation is that innovation doesn’t happen in a vacuum. The software systems that people tend to think about when talking about innovation are irrelevant outside of the context of the organizations that use them to innovate. Those organizations are irrelevant outside of the context they operate in, their ecosystem. A great organization with cutting-edge technology that has no market is not long for this world. This fractal nature of social systems using software systems within an overarching ecosystem is why I find enterprise architecture increasingly relevant to software architecture. I say “increasingly” because it’s my awareness of its importance that is increasing; the relevance has always been there. Matt Ballantine’s post, “Down Pit”, points this out in the context of Britain’s post-WWII coal mining industry.

Like the sharks, organizations cannot stay immobile and expect to live. Unlike biological organisms, organizations have a far greater degree of control over their internal structures and processes to enable them to adapt to their environment. Whether it’s referred to as Enterprise Architecture or not, there is a need for organizations to understand the context in which they operate. Adapting to this context requires both technological and social (structural and cultural) flexibility.

Unlike sharks, which evolved to “sleep swim”, organizations must make their adaptation intentional, considered, and ongoing. Rather than formulaic responses, which tend to jump to the ends of spectra, tailoring is needed to find the right mix. This is one concept that I neglected to fully develop in “Fixing IT – Too Big to Succeed?”: too distributed is likely to be as bad as too centralized. Some aspects of IT need to be centralized to be effective, whereas others need to be federated to be effective. Not all embedded teams will be at the same organizational scope. Pragmatism is required as well. Blindly adhering to a rule that each application must have a dedicated, permanent product team means either that the company will go bankrupt or that only the units that can afford a team will have applications. A middle ground, while not doctrinally pure, is both possible and preferable.

It’s not enough to just swim; you must swim with purpose. Otherwise, all that emerges is entropy.

#NoEstimates – Questions, Answers, and Credibility

[Image: “This is questioning?”]

A recent episode of Thomas Cagley’s SPAMCast featured Woody Zuill discussing #NoEstimates. During the episode, Woody talked about his doubts about the usefulness of estimating and the value of questioning processes rather than accepting them blindly.

I’m definitely a fan of pragmatism. Rather than relying on dogma, I believe we should be constantly evaluating our practices and improving those found wanting. That being said, to effectively evaluate our practices, we need to be able to frame the evaluation. It’s not enough to pronounce something useful or not; we have to be able to say who is involved, what they are seeking, what the impact on them is, and to what extent the outcome matches what they are seeking. Additionally, being able to reason about why a particular practice tends to generate a particular outcome is critical to determining what corrective action will work. In short, if we don’t know what the destination looks like, we will have a hard time steering toward it.

In his blog post “Why do we need estimates?”, Woody (rightly, in my opinion) identifies estimates as a tool used for decision-making. He also lists a number of decisions that estimates can be used to make:

  • To decide if we can do a project affordably enough to make a profit.
  • To decide what work we think we can do during “a Sprint”. (A decision about amount)
  • To decide what work we should try to do during “a Sprint”. (A decision about priority or importance)
  • To decide which is more valuable to us: this story or that story.
  • To decide what project we should do: Project A or Project B.
  • To decide how many developers/people to hire and how fast to “ramp up”.
  • To decide how much money we’ll need to staff a team for a year.
  • To give a price, or an approximate cost so a customer can decide to hire us to do their project.
  • So we can determine the team’s velocity.
  • So marketing can do whatever it is they do that requires they know 6 months in advance exactly what will be in our product.
  • Someone told me to make an estimate, I don’t use them for anything.

What is missing, however, is an alternate way to make these decisions. It’s also missing from the follow-up post “My Customers Need Estimates, What Do I do?”. If the customer has a need, does it seem wise to ask them to abandon (not amend, but abandon) a technique without proposing something else? Even “let’s try x instead of y” is insufficient if we can’t logically explain why we expect “x” to work better than “y”. The issue is one of credibility, a matter of trust.

In her post “What Creates Trust in Your Organization?”, Johanna Rothman related her technique for creating trust:

Since then, I asked my managers, “When do you want to know my project is in trouble? As soon as I think I’m not going to meet my date; after I do some experiments; or the last possible moment?” I create trust when I ask that question because it shows I’m taking their concerns seriously.

After that project, here is what I did to create trust:

  1. Created a first draft estimate.
  2. Tracked my work so I could show visible progress and what didn’t work.
  3. Delivered often. That is why I like inch-pebbles. Yes, after that project, I often had one- or two-day deliverables.
  4. If I thought I wasn’t going to make it, use the questions above to decide when to say, “I’m in trouble.”
  5. Delivered a working product.

While I can’t say that Johanna’s technique is the optimal one for all situations, I can at least explain why I can put some faith in it. In my experience, transparency, collaboration, and a respect for my stakeholders’ needs tends to work well. Questions without answers? Not so much.

Are Microservices the Next Big Thing?

[Image: bandwagon]

The question popped up in a LinkedIn discussion of my last post:

“A question for you, are microservices the next big thing?”

It was a refreshing change of pace to actually be asked about my opinion rather than be told what it is. The backlash against microservices is such that anything less than outright condemnation is seen as “touting” the latest fad (this in spite of my having authored posts such as “Microservice Architectures aren’t for Everyone”, “Microservice Mistakes – Complexity as a Service”, and “Microservices – The Too Good to be True Parts”). Reflex reactions like this are likely to be so simplistic as to be useless, whether pro or con.

Architectural design is about making decisions. Good design, in my opinion, is about making decisions justified by the context at hand rather than following a recipe. It’s about understanding the situation and applying principles rather than blindly replicating a pattern. No one cares about how cutting edge the technology is, they care that the solution solves the problem without bankrupting them.

So, to return to the question of whether I think microservices are “the next big thing”:

They shouldn’t be, but they might be. Shouldn’t, because a full-blown microservice architecture, while perfectly appropriate for Netflix, isn’t likely to be appropriate for most in-house corporate applications. Most won’t see enough benefit to justify the additional operational and development complexity. That being said, lots of inappropriate SOA initiatives sucked up a lot of money not that long ago – if software development had a mascot, it would be an immortal lemming with amnesia.

The principles behind MSA, however, have some value IMHO. When a monolithic architecture begins to get in the way, those principles can provide guidance on how to carve it up. The wonderful thing about the Single Responsibility Principle (SRP) is its fractal nature: you can split out responsibilities at whatever level of granularity is appropriate to your application’s situation, when it makes sense. There’s no rule that you have to carve up everything at once or that you have to slice it as thin as they do at Netflix. The sketch below illustrates the idea.
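As a minimal sketch of that incremental approach (hypothetical names throughout, not a prescription): put a responsibility behind an interface inside the monolith first. Callers depend on the contract, not the location, so the implementation can move out of process later, at whatever pace and granularity makes sense.

```python
# A responsibility isolated behind an interface inside the monolith.
# (Toy example; names like "InvoiceCalculator" are purely illustrative.)
from abc import ABC, abstractmethod

class InvoiceCalculator(ABC):
    @abstractmethod
    def total(self, order_id: str) -> float: ...

class InProcessInvoiceCalculator(InvoiceCalculator):
    """Today: the logic lives inside the monolith."""
    def total(self, order_id: str) -> float:
        return 42.0  # placeholder for real pricing logic

class RemoteInvoiceCalculator(InvoiceCalculator):
    """Tomorrow, if warranted: the same contract fronting a service call."""
    def __init__(self, base_url: str):
        self.base_url = base_url
    def total(self, order_id: str) -> float:
        # A real version would call something like
        # f"{self.base_url}/invoices/{order_id}"; omitted to keep this runnable.
        raise NotImplementedError

def checkout(calc: InvoiceCalculator, order_id: str) -> None:
    # The caller neither knows nor cares where the calculation runs.
    print(f"Order {order_id} total: {calc.total(order_id):.2f}")

checkout(InProcessInvoiceCalculator(), "A-100")
```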

That’s why my posts on the subject tend to sway back and forth. It’s not a recipe for the “right way” (as if there could be one right way) to design applications, it’s merely another set of ideas that, depending on the context, could help or hurt.

It’s not the technique itself that makes or breaks a design; it’s how applicable the technique is to the problem at hand.

#4U2U – Canned Competency, Values & Pragmatism

[Image: home-canned food]

Not quite two years ago, I put up a quick post entitled “The Iron Law of Tools”, which in its essence was: “that which does for you, can do to you” (whence comes the #4U2U in the title of this post). That particular post focused on ORMs (Entity Framework, to be specific), but the warning applies equally to libraries and frameworks for other technical issues, as well as to processes, methodologies, and techniques.
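A minimal illustration of the “does for you, can do to you” dynamic (toy code standing in for a real ORM, hypothetical names throughout): a lazy-loading convenience spares the caller from thinking about data access, and in doing so quietly turns “one query” into N+1 round trips.

```python
# Toy stand-in for an ORM's lazy loading; no real database involved.
QUERY_COUNT = 0
ORDER_TABLE = {1: ["A-101", "A-102"], 2: ["B-103"], 3: []}  # customer -> orders

def run_query(customer_id):
    """Stand-in for a database round trip; counts every call."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return ORDER_TABLE.get(customer_id, [])

class Customer:
    def __init__(self, customer_id):
        self.customer_id = customer_id
        self._orders = None  # not loaded yet

    @property
    def orders(self):
        # The convenience: callers never think about loading...
        if self._orders is None:
            self._orders = run_query(self.customer_id)  # ...or about this round trip
        return self._orders

customers = [Customer(cid) for cid in ORDER_TABLE]  # imagine one query for the list
total = sum(len(c.orders) for c in customers)       # plus one hidden query per customer
print(f"{total} orders fetched at the cost of {QUERY_COUNT} extra queries")
```

The abstraction did exactly what it promised; the cost was simply invisible until someone looked underneath it.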

Libraries, frameworks, and processes (“tools” from this point forward) can make things easier by allowing you to concentrate on what to do rather than how to do it (via high-level abstractions and/or conventions). However, tools are not a substitute for understanding. Neither the Law of Unintended Consequences nor Murphy’s Law has been repealed. Without an adequate understanding of how something works, you cannot assess the costs of the trade-offs being made (and there are trade-offs involved; you can rely on that). Understanding is likewise necessary to recognize and fix those situations where the tool causes more problems than it solves. As Pawel Brodzinski observed in his post “A Fool With a Tool Is Still a Fool”:

Any time a discussion goes toward tools, any tools really, it’s a good idea to challenge the understanding of a tool itself and principles behind its successes. Without that shared success stories bear little value in other contexts, thus end result of applying the same tools will frequently result in yet another case of a cargo cult. While it may be good for training and consulting businesses (aka prophets) it won’t help to improve our organizations.

A fool with a tool will remain a fool, only more dangerous since now they’re armed.

Pawel’s point regarding cargo cults is particularly important. Lack of understanding of how a particular effect proceeds from a given cause often manifests as dogmatic assertions in defense of some “universal truth”. The closest thing I’ve found to a universal truth of software development is that it’s very unlikely that anything is universally applicable to every context.

It’s dangerous to conflate adherence to a tool with one’s core values, such that anyone who disagrees is “wrong” or “deluded” or “unprofessional”. That being said, values can provide a frame of reference in understanding someone’s position in regard to a tool. In “The TDD Divide: Everyone is Right”, Cory House addresses the current dispute over Test-Driven Development and notes (rightly, in my opinion):

The world is a messy place. Deadlines loom, team skills vary widely, and the impact of bugs varies greatly by industry. Ultimately, we write software to make money and solve problems. Tests are a tool that help us do both. Consider the context to determine which testing style fits for your project.

Uncle Bob is right. Quality matters. Separation of concerns and unit testing help assure the utmost quality, speed, and flexibility.

DHH is right. Sometimes the cost of unit tests exceeds their benefit. Some of the benefit of automated testing can be achieved through automated integration testing instead.

You need to understand what a tool offers, what it costs, and how that equation plays out in relation to what’s important in your context. With that understanding, you can make a rational choice.
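To make the trade-off concrete, here’s a minimal sketch (hypothetical logic, standard library only) contrasting the two testing styles. Neither is “right”; they buy different things at different prices.

```python
import sqlite3
import unittest

def apply_discount(price, rate):
    """The logic under test (purely illustrative)."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class UnitStyle(unittest.TestCase):
    def test_discount_math(self):
        # Fast, cheap, and pinpoints failures, but covers only this function.
        self.assertEqual(apply_discount(100.0, 0.15), 85.0)

class IntegrationStyle(unittest.TestCase):
    def test_priced_order_roundtrip(self):
        # Exercises storage plus logic in one test: broader coverage per test,
        # but slower, and vaguer about where a failure actually lives.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE orders (id TEXT, price REAL)")
        db.execute("INSERT INTO orders VALUES ('A-100', 100.0)")
        (price,) = db.execute("SELECT price FROM orders WHERE id = 'A-100'").fetchone()
        self.assertEqual(apply_discount(price, 0.15), 85.0)

if __name__ == "__main__":
    unittest.main()
```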

Technical Debt & Quality – Binary Thinking in an Analog World

[Image: “How many shades of gray?”]

I admit it, I’m a pragmatist.

Less than two weeks after starting this blog, I posted “There is no right way (though there are plenty of wrong ones)”, proclaiming my adherence to the belief that absolutes rarely stand the test of time. In design as well as development process, context is king.

Some, however, tend to take a more black and white approach to things. I recently saw an assertion that “Quality is not negotiable” and that “Only Technical Debt enthusiasts believe that”. By that logic, all but a tiny portion of working software professionals must be “Technical Debt enthusiasts”, because if you’re not the one paying for the work, then the decision about what’s negotiable is out of your hands. Likewise, there’s a difference between being an “enthusiast” and recognizing that trade-offs are sometimes required.

Seventeen years ago, Fast Company published “They Write the Right Stuff”, showcasing the quality efforts of the team working on the code that controlled the space shuttle. Their results were impressive:

Consider these stats: the last three versions of the program (each 420,000 lines long) had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
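To put those figures in perspective, a rough defect-density comparison using the numbers from the quote:

\[
\frac{1 \text{ error}}{420 \text{ KLOC}} \approx 0.0024 \text{ errors/KLOC}
\qquad \text{vs.} \qquad
\frac{5{,}000 \text{ errors}}{420 \text{ KLOC}} \approx 12 \text{ errors/KLOC}
\]

That’s roughly a five-thousand-fold difference in quality, and yet still not zero.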

Impressive results are certainly in order given the criticality of the software:

The group writes software this good because that’s how good it has to be. Every time it fires up the shuttle, their software is controlling a $4 billion piece of equipment, the lives of a half-dozen astronauts, and the dreams of the nation. Even the smallest error in space can have enormous consequences: the orbiting space shuttle travels at 17,500 miles per hour; a bug that causes a timing problem of just two-thirds of a second puts the space shuttle three miles off course.

It should be noted, however, that while the bug rate was infinitesimally small, it was still greater than zero. With a defined hardware environment, highly trained users, and a process that consumed a budget of $35 million annually, perfection was still out of reach. Reality often tramples over ideals, particularly considering that technical debt can arise from changing business needs and changing technical environments as much as from sloppy practice. Recognizing that circumstances may make technical debt the better choice, and managing it accordingly, is more realistic than taking a dogmatic approach.

For most products, it’s common to find multiple varieties with different features and different levels of quality, with the choice of which best suits their needs left to the consumer. It’s rare, and rightly so, for that value judgment to be taken out of the consumer’s hands. Taking the position that “quality is not negotiable” (with the implicit assertion that you are the authority on what constitutes quality) puts you in exactly that position: dictating to your customer what is in their best interests. If you were the customer, what would your reaction be?