Conflicts, Collaboration, and Customers

With Ron Jeffries' comment on my “Negotiating Estimates” post, I think we’ve reached agreement on a common problem, even if we disagree (perhaps?) over how to fix it. As I noted in my short answer, my position is that abuse of estimates stems from a deeper issue. Changing a practice won’t eliminate the abuse. Changing culture will. Changing that culture (where possible) involves convincing the players that it’s in their best interests to change, which it is.

This is my biggest objection to #NoEstimates. It makes an illogical leap, without showing causality, that the practice is responsible for the problems rather than some underlying dysfunction. Putting the cart before the horse this way risks encouraging people to take action that is likely both ineffective and detrimental to their relationship with their customer. The likeliest outcome I see from that is more problems, not fewer.

The word “customer” was a point of contention:

It’s my position that, for better or for worse, there is a customer/provider relationship. Rather than attempt to deny that, it makes more sense, in my opinion, to concentrate on having the best possible customer/provider relationship. While poor customer service is extremely common, I’d hope everyone can think of at least one person they enjoy doing business with, where the relationship is mutually beneficial. I know I certainly can, which stands as my (admittedly unscientific) evidence that the relationship need not be adversarial. A collaborative, mutually beneficial relationship works out best for both parties.

“Collaborative”, however, does not mean conflict-free. In his post “Against Estimate-Commitment”, Jeffries states:

The Estimate-Commitment relationship stands in opposition to collaboration. It works against collaboration. It supports conflict, not teamwork.

I both agree and disagree.

There will be conflict. Different team members will have different wants. Different stakeholders will have different wants. The key is not in preventing conflict, but in resolving it in a way agreeable to all parties:

Recognizing, rather than repressing, conflicts might even be beneficial:

What is critical is that the relationship between the parties not be one-sided (or even be perceived as one-sided). In-house IT often has the peculiar characteristic of seeming one-sided to both customers and providers. The classic model of IT provision is, in my opinion, largely to blame. It seems almost designed to put both parties at odds.

The problem can be fixed, however, with cultural changes that encompass both IT and the business units it serves. One such change is moving from a project-centric to a product-centric model that aligns the interests of both groups. This alignment cements the relationship, making IT a valuable partner in the process rather than an obstacle to be overcome to get what’s wanted/needed. Relationships are key to success. By structuring the system so that each group’s incentives, conflicts and all, are aligned to a common goal, the system can be made to work. We’ve certainly had plenty of examples of what we get from the opposite situation.

Form Follows Function on SPaMCast 357

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on mind mapping, Steve Tendon discussing the TameFlow methodology, and a Form Follows Function installment on the need for architectural design.

In SPaMCast 357, Tom and I discuss my post “Who Needs Architects? Because YAGNI Doesn’t Scale”.

“Microservice Architectures aren’t for Everyone” on Iasa Global

Simon Brown says it nicely:

Architect Clippy is a bit more snarky:

In both cases, the message is the same: microservice architectures (MSAs), in and of themselves, are not necessarily “better”. There are aspects where moving to a distributed architecture from a monolithic one can improve a system (e.g. enabling selective load balancing and incremental deployment). However, if someone isn’t capable of building a modular monolith, distributing the pieces is unlikely to help matters.
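
To make “modular monolith” concrete, here’s a minimal sketch in Python (the class and method names are hypothetical, invented for illustration rather than taken from any real system). The point is that each piece hides its internals behind an explicit interface, which is exactly the kind of boundary that could later become a network call:

    class OrderRepository:
        """Persistence is an internal detail of the orders module."""
        def __init__(self):
            self._orders = {}

        def save(self, order_id, order):
            self._orders[order_id] = order

        def find(self, order_id):
            return self._orders.get(order_id)

    class OrderService:
        """The only entry point other modules are allowed to use."""
        def __init__(self, repository):
            self._repository = repository

        def place_order(self, order_id, items):
            self._repository.save(order_id, {"items": items, "status": "placed"})

        def order_status(self, order_id):
            order = self._repository.find(order_id)
            return order["status"] if order else None

    class BillingService:
        """Depends on OrderService's interface, never its internals."""
        def __init__(self, order_service):
            self._orders = order_service

        def bill(self, order_id):
            if self._orders.order_status(order_id) == "placed":
                return f"invoice for {order_id}"
            return None

    repo = OrderRepository()
    orders = OrderService(repo)
    billing = BillingService(orders)
    orders.place_order("o-1", ["widget"])
    print(billing.bill("o-1"))  # invoice for o-1

Because BillingService reaches orders only through OrderService, that call could become a remote one without redesign. If it rummaged through _orders directly, distributing the pieces would add latency and failure modes while helping nothing.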

See the full post on the Iasa Global site (a re-post, originally published here).

Negotiating Estimates

In my previous post dealing with Ron Jeffries’ (since revised) “Summing up the discussion”, I concentrated strictly on the customer-related aspects. I did, however, note some language regarding negotiating estimates that I wanted to touch on:

“Negotiating” estimates is deeply embedded into most cultures. It probably started in the marketplace in the village in ancient Greece, where the carrot guy tried to get three hemitetartemorions for his carrots, and your great-to-the-nth grandmother talked him down to one.
We assume that a contractor’s estimate has fat in it and we assume that we need tough negotiation to squeeze it out. The better the contractor is at estimating, the more this process hurts him, because he has nothing left to squeeze. And in the end, it hurts the buyer as well: the only way the contractor has to survive is to cut quality.

Jeffries is absolutely correct that negotiation is an ancient tradition. Likewise, he is correct that shaving an accurate estimate may well end up costing the customer in reduced quality (especially with point estimates, which are an incredibly poor practice, in my opinion). What he fails to mention, however, is that negotiating an estimate need not happen. In fact, negotiating an estimate without negotiating the deliverable is pretty much the worst possible thing you could do. You’re risking your profit margin, any future business with the customer, or quite possibly both. It’s a bad idea for a vendor and incredibly ill-considered for in-house IT (it’s not like you can take the money and run).

In my opinion, when someone is willing to change an estimate without a change in what they’re estimating, it’s a bad sign. If you adequately understand the information at hand, have some experience, and have made a good faith effort, there’s no reason to be willing to change the estimate without learning more about what is or is not needed. It’s not really a negotiation if only one party is getting what they want, particularly if the other party is getting abused. For the abused party, striking back (by shaving quality without the customer’s knowledge) is counter-productive. The only negotiation should be what’s in scope and what’s not.

In a balanced relationship, the provider can explain what the customer is getting for their money and the customer realizes they won’t get something for nothing. Communication and collaboration can provide the basis for trust. Trust is essential for both parties to become partners in delivering the best possible product.

‘Customer collaboration over contract negotiation’ and #NoEstimates

Since my last post on #NoEstimates, the conversation has taken an interesting turn. Steve McConnell weighed in on July 30th, with a response from Ron Jeffries on the following day. There have been several posts from each since, with the latest from Jeffries titled “Summing up the discussion”. In that post, he states the following (emphasis added):

The core of our disagreement here is that Steve says repeatedly that if the customer wants estimates, then we should give them estimates. He offers the Manifesto’s “Customer collaboration over contract negotiation”1 as showing that we should do that. (It’s interesting that he makes that point in a kitchen where surely he expects that the couple and the contractor(!) are going to negotiate a contract.)

To anyone who has been even minimally involved with business, the idea that a contract will be negotiated should not be a surprise, no matter how agile the provider. In spite of his footnote warning “I confess that I am singularly unamused when people bludgeon me with quotes from the Manifesto”, I have to note that the Agile Manifesto does say “…while there is value in the items on the right, we value the items on the left more” rather than “only the items on the left have value”.

In the post, Jeffries says that in an agile transformation, the “…contractor-customer relationship gets inverted”. I have to confess that the statement is a bit baffling. A situation where “the customer is always right” can certainly degrade into a no-win scenario. Inverting that relationship, however, is equally bad. Good, sustainable business relationships should be relatively (even if not perfectly) balanced. Moving away from minutely detailed contracts that prevent any change without days of negotiation just makes sense for both parties. Eliminating all obligation on the part of the provider, however, is a much harder sell.

Jeffries asserts that “Agile, if you’ll actually do it, changes the fundamental relationship from an estimate-commitment model to a truly collaborative stream of building up the product incrementally.” If, however, you fail to meet your customer’s needs (and, as I pointed out in “#NoEstimates – Questions, Answers, and Credibility”, one of the best lists of why estimates are needed for decision-making comes from Woody Zuill himself), how collaborative is that? Providing estimates with the caveat that they are subject to change as the knowledge about the work changes is more realistic and balanced for both parties. That balance, in my opinion (and experience) is far more likely to result in a collaboration between customer and provider than any inversion of the relationship.

Trust is essential to fostering collaboration. If we fail to understand our customers’ needs (the impression given when we make statements like the one in the tweet below), we risk fatally damaging the trust we need to create an effective collaboration.

Cause and Effect – Cargo Cults and Carts Before Horses

Sometimes our love of shortcuts can make us really stupid. Take, for example, the idea of “Fail Fast”. As Jeff Sussna observed in his post “Rethinking Failure”, “Suddenly failure is all the rage.” He also noted:

By itself, failure is anything but good. Making the same mistake over and over again doesn’t help anyone. Failure leads to success when I learn from it by changing my behavior or understanding in response to it. Even then, it’s impossible to guarantee that my response will in fact lead to success. The validity of any given response can only be evaluated in hindsight.

Dan McClure, in “Why the “Fail Fast” Philosophy Doesn’t Work”, agreed:

If your only strategy for exploring the unknown is to pick up rocks and look underneath, then the more rocks you turn over the better. The problem is that for real world innovations, test and reject doesn’t scale well. For disruptive ideas with the potential to make a difference in the market there are lots and lots of rocks.

The value in “Fail Fast” lies in the “Fast” part; there’s no magic in the “Fail”. If you’re going to fail, finding out about it sooner, rather than later, is less costly. Less costly, however, is a far cry from best. Succeeding obviously works much better than failing fast, meaning that methods which allow you to evaluate without incurring the time, pain, and expense of a failure are a better choice when available.

Another example of this phenomenon is what Matt Balantine recently referred to as “investor-centric” development:

That, of course, creates an interesting rabbit hole – investors chasing products that will be “hot” and products designed to appeal to investors rather than customers (which would result in the product becoming “hot”). Matt’s comment from his post “What if the answer isn’t software?” applies, “I’ve no doubt that we are seeing real issues and opportunities being ignored in the pursuit of the rainbow-pooping unicorns.”

Yet another example of magical thinking is belief in the “Great Man Theory”. People like Steve Jobs and Elon Musk have achieved great things, but as a result of what they did, not who they are. It’s a hard sell to argue that, divorced from their context, they would have been equally successful.

Effectiveness is more likely to come from systems thinking than magical thinking. Understanding cause and effect as well as interrelationships and context makes the difference between rational decision-making and superstition.

Microservices, Monoliths, and Conflicts to Resolve

Two tweets, opposites in position, and both have merit. Welcome to the wonderful world of architecture, where the only rule is that there is no rule that survives contact with reality.

Enhancing resilience via redundancy is a technique with a long pedigree. While microservices are a relatively recent and extreme example of this, they’re hardly groundbreaking in that respect. Caching, mirroring, load-balancing, etc. have been with us a long, long time. Redundancy is a path to high availability.

Centralization (as exemplified by monolithic systems) can be a useful technique for simplification, increasing data and behavioral integrity, and promoting cohesion. Like redundancy, it’s a system design technique older than automation. There was a reason that “all roads lead to Rome”. Centralization provides authoritativeness and, frequently, economies of scale.

The problem with both techniques is that neither comes without costs. Redundancy introduces complexity in order to support distributing changes between multiple points and reconciling conflicts. Centralization constrains access and can introduce a single point of failure. Getting the benefits without incurring the costs remains a known issue.
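
As a toy illustration of the redundancy side of that trade-off, here’s a minimal round-robin load balancer (a hypothetical Python sketch, not production code). The redundancy buys availability, but notice that the health tracking and retry logic are pure overhead a single centralized server would never need, and the sketch doesn’t even touch the harder problem of reconciling state between replicas:

    class LoadBalancer:
        """Round-robin over redundant replicas of the same service."""
        def __init__(self, replicas):
            self._replicas = list(replicas)
            self._healthy = set(replicas)  # bookkeeping cost of redundancy
            self._next = 0

        def mark_down(self, replica):
            self._healthy.discard(replica)

        def mark_up(self, replica):
            self._healthy.add(replica)

        def call(self, request):
            # Try each replica at most once; any healthy one will do,
            # which is the availability win.
            for _ in range(len(self._replicas)):
                replica = self._replicas[self._next % len(self._replicas)]
                self._next += 1
                if replica in self._healthy:
                    return f"{replica} handled {request}"
            raise RuntimeError("no healthy replicas")

    lb = LoadBalancer(["app-1", "app-2", "app-3"])
    lb.mark_down("app-2")          # lose a node...
    print(lb.call("GET /orders"))  # ...and requests still succeed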

The essence of architectural design is decision-making. Given that those decisions will involve costs as well as benefits, both must be taken into account to ensure that, on balance, the decision achieves its aims. Additionally, decisions must be evaluated in the greater context rather than in isolation. As Tom Graves is fond of saying, “things work better when they work together, on purpose”.

This need for designs to not only be internally optimal, but also optimized for their ecosystem means that these, as well as other principles, transcend the boundaries between application architecture, enterprise IT architecture, and even enterprise architecture. The effectiveness of this fractal architecture of systems of systems (both automated and human) is a direct result of the appropriateness of the decisions made across the full range of the organization to the contexts in play.

Since there is no one context, no rule can suffice. The answer we’re looking for is neither “microservice” nor “monolith” (or any other one tactic or technique), but fit to purpose for our context.

Who Needs Architects? Because Complexity Emerges

Why would you want to constrain creativity by controlling (Note: controlling does not necessarily imply dictating) the architecture of a system?

As Roger Sessions recently tweeted:

Another reason came out in an exchange between Roger and me:

Simplicity (“…as simple as possible, but not simpler”) is certainly a desirable system quality. Unnecessary complexity directly impairs maintainability and can indirectly affect qualities such as testability, security, extensibility, availability, and ease of deployment just to name a few. The coherent structure necessary to create and maintain simplicity when multiple people are involved is unlikely to happen accidentally. When it’s lacking, the result is often what Brian Foote and Joseph Yoder described in their 1999 classic “Big Ball of Mud”:

A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition.
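
A tiny (and entirely hypothetical) Python fragment shows the “information shared promiscuously” symptom in miniature. Three unrelated concerns read and write the same module-level globals, so none of them can change safely without the others:

    current_user = None   # global: authentication state
    order_total = 0.0     # global: pricing state
    audit_log = []        # global: everything appends here

    def login(name):
        global current_user
        current_user = name
        audit_log.append(f"login: {name}")

    def add_item(price):
        global order_total
        order_total += price
        audit_log.append(f"{current_user} added item at {price}")

    def apply_discount():
        global order_total
        # Pricing logic silently depends on auth state.
        if current_user == "admin":
            order_total *= 0.9

No single line is wrong; the disease is that the important information (“who is logged in”, “what the order costs”) has become global, exactly as described above.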

Sixteen years later, this tendency to entropy via unnecessary complexity remains an issue. As Roger Sessions recently noted on his blog:

As IT systems increase in size, three things happen. Systems get more vulnerable to security breaches. Systems suffer more from reliability issues. And it becomes more expensive and time consuming to try to make modifications to those systems. This should not be a surprise. If you are like most IT professionals, you have seen this many times.

What you have probably not noticed is an underlying pattern. These three undesirable features invariably come in threes. Insecure systems are invariably unreliable and difficult to modify. Secure systems, on the other hand, are also reliable and easy to modify.

This tells us something important. Vulnerability, unreliability, and inflexibility are not independent issues; they are all symptoms of one common disease. It is the disease that is the problem, not the symptoms.

Kent Beck, in a post on Facebook, “Taming Complexity with Reversibility”, recently noted the same:

As a system scales, whether it is a manufacturing plant or a service like ours, the enemy is complexity. If you don’t confront complexity in some way, it will eat you. However, complexity isn’t a blob monster, it has four distinct heads.

  • States. When there are many elements in the system and each can be in one of a large number of states, then figuring out what is going on and what you should do about it grows impossible.
  • Interdependencies. When each element in the system can affect each other element in unpredictable ways, it’s easy to induce harmonics and other non-linear responses, driving the system out of control.
  • Uncertainty. When outside stresses on the system are unpredictable, the system never settles down to an equilibrium.
  • Irreversibility. When the effects of decisions can’t be predicted and they can’t be easily undone, decisions grow prohibitively expensive.

If you have big ambitions but don’t address any of these factors, scale will wreck your system at some point.

Kent’s conclusion, “What changes–technical, organizational, or business–would you have to make to identify such decisions earlier and make reversing them routine?”, implies that addressing these factors requires a systemic, rather than a localized, response.
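
Kent’s first head, “States”, is easy to quantify with back-of-the-envelope arithmetic (the numbers below are invented for illustration). With n elements of k states each, the system has k to the nth power possible states, so modest growth in elements swamps comprehension long before it swamps hardware:

    # k**n possible system states for n elements of k states each.
    def system_states(elements, states_per_element):
        return states_per_element ** elements

    print(system_states(3, 4))    # 64: still possible to reason about
    print(system_states(10, 4))   # 1,048,576
    print(system_states(20, 4))   # ~1.1 trillion: "figuring out what
                                  # is going on" is a lost cause

And that assumes the elements are independent; Kent’s second head, interdependencies, means they usually aren’t, which makes matters worse, not better.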

The counter-argument that’s frequently made is that the architecture can “emerge” from the implementation by doing the “simplest thing that can possibly work” and building on that. I touched briefly on this in my last post. To that, I’d add that not all changes are the same. Trying to bolt on fundamental, cross-cutting concerns (e.g. security, scalability, etc.) after the fact risks major (i.e. expensive) architectural refactoring. This type of refactoring is generally a hard sell and understandably so. That difficulty makes the Big Ball of Mud that much more likely.
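
To make the bolt-on risk concrete, take a cross-cutting concern like authorization. The sketch below (hypothetical names, and only one of many possible mechanisms) shows it designed in as a single decorator applied at each entry point; the alternative is the same check hand-copied into dozens of handlers, and collecting those back into one place is precisely the expensive refactoring described above:

    import functools

    def requires_role(role):
        """Authorization lives in one place; tightening the rules
        is a change here, not a hunt through every handler."""
        def decorator(handler):
            @functools.wraps(handler)
            def wrapper(user, *args, **kwargs):
                if role not in user.get("roles", []):
                    raise PermissionError(f"{role} role required")
                return handler(user, *args, **kwargs)
            return wrapper
        return decorator

    @requires_role("admin")
    def delete_account(user, account_id):
        return f"deleted {account_id}"

    admin = {"name": "pat", "roles": ["admin"]}
    print(delete_account(admin, "acct-42"))  # deleted acct-42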

This does not mean that the concept of emergence has no place in software architecture. As the various contexts making up the architecture of the problem are identified, challenges will emerge and need to be reconciled. With this naturally occurring emergence, it makes little sense to artificially generate challenges by refusing to “peek” at what’s ahead. As Ruth Malan has observed, architecture should be both “intentional and emergent”:

This need to contend with emergent issues continues for the entire lifetime of the system:

I’ve noted in the past that planning and design share similarities. Regardless of the task, an appropriate design/plan provides a coherent direction that enhances the likelihood of success. Without it, Joe Dager’s question below is directly on point:

Form Follows Function on SPaMCast 353

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on learning styles and Steve Tendon discussing knowledge workers, along with a Form Follows Function installment on why microservice architectures are not a silver bullet.

In SPaMCast 353, Tom and I discuss my post “Microservice Architectures aren’t for Everyone”.