Who Needs Architects? Because YAGNI Doesn’t Scale

If you choose not to decide
You still have made a choice

Rush – “Free Will”

Bumper sticker philosophies (sayings that are short, pithy, attractive to some, and so lacking in nuance as to be dangerous) are pet peeves of mine. YAGNI (“You Ain’t Gonna Need It”) is right at the top of my list. I find it particularly dangerous because it contains a kernel of truth expressed in a manner that makes it very easy to get into trouble. This is especially the case when it’s paired with others like “the simplest thing that could possibly work” and “defer decisions to the last responsible moment”.

Much has already been written (including some from me) about why it’s a bad idea to implement speculative features just because you think you might need them. The danger there is easy enough to see that it’s not worth reiterating here. What some seem to miss, however, is that there is a difference between implementing something you think you might need and implementing something that you know (or should know) you will need. This is where “the simplest thing that could possibly work” can cause problems.

In his post “Yagni”, Martin Fowler detailed a scenario where there’s a temptation to implement features that are planned for the future but not yet ready to be used by customers:

Let’s imagine I’m working with a startup in Minas Tirith selling insurance for the shipping business. Their software system is broken into two main components: one for pricing, and one for sales. The dependencies are such that they can’t usefully build sales software until the relevant pricing software is completed.

At the moment, the team is working on updating the pricing component to add support for risks from storms. They know that in six months time, they will need to also support pricing for piracy risks. Since they are currently working on the pricing engine they consider building the presumptive feature for piracy pricing now, since that way the pricing service will be complete before they start working on the sales software.

Even knowing that a feature is planned is not reason enough to jump the gun and implement it ahead of time. As Fowler points out, there are carrying costs and opportunity costs involved in doing so, not to mention the risk that the feature drops off the radar or its details change. That being said, there is a difference between prudently waiting to implement the feature until it’s time and ignoring it altogether:

Now we understand why yagni is important we can dig into a common confusion about yagni. Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify. Yagni is only a viable strategy if the code is easy to modify, so expending effort on refactoring isn’t a violation of yagni because refactoring makes the code more malleable. Similar reasoning applies for practices like SelfTestingCode and ContinuousDelivery. These are enabling practices for evolutionary design, without them yagni turns from a beneficial practice into a curse.

In other words, while it makes sense to defer the implementation of the planned feature, it’s also smart to ensure that there’s sufficient flexibility in the design so that the upcoming feature won’t require refactoring to the point of being almost a rewrite. Deferring the decision to do necessary work now is a decision to incur unnecessary rework later. Likewise, techniques implemented to ameliorate failure (circuit breakers in distributed systems, feature flags, etc.) should not be given the YAGNI treatment, even when you hope they’re never needed. The “simplest thing that could possibly work” may well be completely inadequate for the messy world the system inhabits.
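To make the circuit breaker example concrete, here is a minimal sketch in Python. The names, thresholds, and error handling are illustrative assumptions on my part, not a production implementation; the point is only that this “hope it’s never needed” capability is cheap to have and painful to lack.

    import time

    class CircuitBreaker:
        """Fail fast once a dependency has failed repeatedly, then allow a
        trial call after a cool-down instead of hammering the dependency."""

        def __init__(self, max_failures=3, reset_timeout=30.0):
            self.max_failures = max_failures    # failures before tripping open
            self.reset_timeout = reset_timeout  # cool-down period in seconds
            self.failure_count = 0
            self.opened_at = None               # None means the circuit is closed

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_timeout:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None           # half-open: allow one trial call
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failure_count += 1
                if self.failure_count >= self.max_failures:
                    self.opened_at = time.monotonic()  # trip the breaker
                raise
            self.failure_count = 0              # success closes the circuit
            return result

Wrapping a flaky remote call as breaker.call(fetch_price, request) means callers get an immediate failure while the dependency recovers, rather than a pile-up of timeouts.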

It’s probably impossible to architect a system that is perfect, since the definition of perfection changes. It is possible, however, to architect systems that deal gracefully with change. That requires thought, not adherence to simplistic slogans.

“Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?” on Iasa Global

In my part of the world, it’s not uncommon for people to say that someone wouldn’t recognize something if it “bit them in the [rude rump reference]”. For many organizations, that seems to be the explanation for the state of their enterprise IT architecture. For while we might claim to understand terms like “design”, “encapsulation”, and “separation of concerns”, the facts on the ground fail to show it. Just as software systems can degenerate into a “Big Ball of Mud”, so too can the systems of systems comprising our enterprise IT architectures. If we look at organizations as the systems they are, it should be obvious that this same entropy can infect the organization as well.

See the full post on the Iasa Global site (a re-post, originally published here).

Maybe It’s Time for Customer Driven Development

A couple of Tweets from Robert Smallshire caught my eye last month.

Seb Rose’s reply to the first Tweet, however, went straight to the heart of the matter:

It’s not about any one practice. It’s about how that practice affects the customer.

It’s not about technology, but about fulfilling customers’ needs, and the customer is pretty much the only one who can say definitively whether those needs are being met.

Learning by shipping is all well and good, assuming that it’s the only (or at least the least harmful) way to get the information needed and that the customer is aware of what’s being done. As Matt Ballantine observed in his post “What if the Answer isn’t Software?”:

In the world of startup, the answer is probably that Agile development of software that is attempting to answer a non-software problem is failure. It seems to be widely accepted that 90% of software startups fail – and that’s presumably a number that doesn’t include a far greater number of embryonic ideas that don’t even make it to startup stage. Some of that failure will be because of lack of luck, some because of poor management, but many will be because they are half-arsed ideas to non-problems (“Hey, it’s like Facebook, but for frogs!”), and the others are because they are trying to address something through software which has other factors in the way (usually: people).

I don’t know many organisations that can significantly fund projects (particularly internal technology projects) to 90% rates of failure. In fact, most organisations have systematic processes in place (investment proposals) that are designed to mitigate against such numbers. Whether those processes work or not is a moot point.

Sometimes you have to build something and get it in front of people to truly know whether it will work. Sometimes (Windows 8 comes to mind) you can know in advance whether it’s a good idea. In either case, the customer should know when they’re paying for an experiment.

Whether you work for an in-house IT shop, an outsourcer, or an ISV, the point of your work is pleasing a customer. The proximity may be different, but the goal is not. When we talk about what we’re going to do and/or how we’re going to do it, if there’s no mention of customer impact, it’s likely that we’re on the wrong track.

Estimates – Uses, Abuses, and that Credibility Thing Again

In my last post, “#NoEstimates – Questions, Answers, and Credibility”, I brought up the potential for damaging one’s credibility by proposing changes without being able to explain why the change should be beneficial. Consider the following exchange on Twitter:

When the same person states “There is a lot written about this” and “The question is young and exploration ongoing” back to back, it is reasonable to wonder how much thought has been put into the “question” versus the desired solution. It’s also reasonable to wonder whether this really qualifies as a “question”.

Some people latch on to ideas that tell them they shouldn’t have to do things they don’t like to do. Responsibility, however, sometimes requires us to do them anyway. It’s not a matter of being tied to a particular practice, but of avoiding things that make us look foolish, such as these:

[Note: I find these dismaying not because of opposition to agility, but because I am a proponent of agility.]

None of this is to deny that some misuse, even abuse, estimates. That being said, abusive managers/clients are unlikely to surrender their “stick”. The problem is not the tool, but how it’s employed. It might be useful to consider exactly how likely they are to give up being abusive even if they do agree to quit asking for estimates.

In my opinion, many of the issues around estimates stem from gaps in communication and knowledge. Typically, the less we know about something, the harder it is to reason about it (size, complexity, etc.). In my experience, two practices mitigate this: providing estimates as a range (wide initially, narrowing as the item becomes better defined) rather than as a single point, and revising the estimate when new information arises (regardless of whether that information is an unforeseen issue found by the team or a change requested by the customer). It shouldn’t be a surprise that it’s hard to succeed by making a single estimate up front and refusing to revise it no matter what new information comes in. That’s a bit like driving somewhere by putting on a blindfold and following the GPS without question.
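As an illustration only (the class and the numbers below are hypothetical, not a prescribed tool), a ranged estimate that gets revised as information arrives might look like this in Python:

    from dataclasses import dataclass

    @dataclass
    class RangeEstimate:
        """An estimate expressed as a low/high range rather than a single point."""
        low_days: float
        high_days: float

        def revise(self, low_days, high_days, reason):
            # Record why the range moved, so the customer sees the new
            # information rather than just a changed number.
            print(f"{self.low_days}-{self.high_days} days -> "
                  f"{low_days}-{high_days} days ({reason})")
            self.low_days, self.high_days = low_days, high_days

    estimate = RangeEstimate(10, 30)  # early on: wide range, little known
    estimate.revise(15, 22, "proof of concept settled the integration approach")
    estimate.revise(18, 26, "customer added an audit logging requirement")

The point isn’t the code, it’s the behavior: the range narrows as knowledge improves, shifts when scope changes, and both moves are visible to the customer.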

Another practice that can help with uncertainty is using a proof of concept. When in unknown territory, I’ve found it’s easier to estimate how long to explore something than to estimate both the exploration and the implementation of a specific feature. Additionally, this practice makes it explicit for the customer that they’re paying for the learning, not the feature itself. Transparency about the risks allows them to accurately determine how much they’re willing to pay just to find out if the tool/technology/etc. will meet their needs.

This is probably a good time to repeat the way estimates are used (from Woody Zuill’s “Why do we need estimates?”):

  • To decide if we can do a project affordably enough to make a profit.
  • To decide what work we think we can do during “a Sprint”. (A decision about amount)
  • To decide what work we should try to do during “a Sprint”. (A decision about priority or importance)
  • To decide which is more valuable to us: this story or that story.
  • To decide what project we should do: Project A or Project B.
  • To decide how many developers/people to hire and how fast to “ramp up”.
  • To decide how much money we’ll need to staff a team for a year.
  • To give a price, or an approximate cost so a customer can decide to hire us to do their project.
  • So we can determine the team’s velocity.
  • So marketing can do whatever it is they do that requires they know 6 months in advance exactly what will be in our product.
  • Someone told me to make an estimate, I don’t use them for anything.

All of these are valid enough as stated, but let’s re-word a few:

  • To help the customer decide what work we should try to do during “a Sprint”. (A decision about priority or importance)
  • To help the customer decide which is more valuable to us: this story or that story.
  • To help the customer decide what project we should do: Project A or Project B.
  • To help the customer decide how much money we’ll need to staff a team for a year.

It needs to be really clear that the customer’s needs are the important ones here.

Another use is helping to determine whether a particular issue is worth addressing. It’s hard to gauge the weight of technical debt without a sense of the cost to fix it versus the cost of leaving it alone.

To determine value, we need to know both benefit and cost. A feature that earns a million dollars at a cost of $999,999 (netting a single dollar) is a lousy deal compared to one making $10,000 at a cost of $1,000 (netting $9,000).

Shaping our practices around our customer’s needs is a good way to create a partnership with those customers. In my opinion, that’s an easier sell to the customer than “dump the estimates and let’s see what happens”.

#NoEstimates – Questions, Answers, and Credibility

This is questioning?

A recent episode of Thomas Cagley’s SPAMCast featured Woody Zuill discussing #NoEstimates. During the episode, Woody expressed his doubts about the usefulness of estimating and made the case for questioning processes rather than accepting them blindly.

I’m definitely a fan of pragmatism. Rather than relying on dogma, I believe we should be constantly evaluating our practices and improving those found wanting. That being said, to effectively evaluate our practices, we need to be able to frame the evaluation. It’s not enough to pronounce something useful or not; we have to be able to say who is involved, what they are seeking, what the impact is on them, and to what extent the outcome matches what they are seeking. Additionally, being able to reason about why a particular practice tends to generate a particular outcome is critical to determining what corrective action will work. In short, if we don’t know what the destination looks like, we will have a hard time steering toward it.

In his blog post “Why do we need estimates?”, Woody (rightly, in my opinion) identifies estimates as a tool used for decision-making. He also lists a number of decisions that estimates can be used to make:

  • To decide if we can do a project affordably enough to make a profit.
  • To decide what work we think we can do during “a Sprint”. (A decision about amount)
  • To decide what work we should try to do during “a Sprint”. (A decision about priority or importance)
  • To decide which is more valuable to us: this story or that story.
  • To decide what project we should do: Project A or Project B.
  • To decide how many developers/people to hire and how fast to “ramp up”.
  • To decide how much money we’ll need to staff a team for a year.
  • To give a price, or an approximate cost so a customer can decide to hire us to do their project.
  • So we can determine the team’s velocity.
  • So marketing can do whatever it is they do that requires they know 6 months in advance exactly what will be in our product.
  • Someone told me to make an estimate, I don’t use them for anything.

What is missing, however, is an alternate way to make these decisions. It’s also missing from the follow-up post “My Customers Need Estimates, What Do I do?”. If the customer has a need, does it seem wise to ask them to abandon (not amend, but abandon) a technique without proposing something else? Even “let’s try x instead of y” is insufficient if we can’t logically explain why we expect “x” to work better than “y”. The issue is one of credibility, a matter of trust.

In her post “What Creates Trust in Your Organization?”, Johanna Rothman related her technique for creating trust:

Since then, I asked my managers, “When do you want to know my project is in trouble? As soon as I think I’m not going to meet my date; after I do some experiments; or the last possible moment?” I create trust when I ask that question because it shows I’m taking their concerns seriously.

After that project, here is what I did to create trust:

  1. Created a first draft estimate.
  2. Tracked my work so I could show visible progress and what didn’t work.
  3. Delivered often. That is why I like inch-pebbles. Yes, after that project, I often had one- or two-day deliverables.
  4. If I thought I wasn’t going to make it, use the questions above to decide when to say, “I’m in trouble.”
  5. Delivered a working product.

While I can’t say that Johanna’s technique is the optimal one for all situations, I can at least explain why I can put some faith in it. In my experience, transparency, collaboration, and a respect for my stakeholders’ needs tends to work well. Questions without answers? Not so much.

Modeling the Evolution of Software Architecture

Herve Lourdin’s tweet wasn’t aimed at modeling, but the image he shared nicely illustrates a critical deficiency in modeling languages – showing the evolution of a system over time. Structure and behavior are captured, but only for a given point in time. Systems and their ecosystems, however, are not static. A map of the destination without reference to the point of origin or the rationale for choices is of limited use in communicating the what, how, and why behind architectural decisions.

“The Road Ahead for Architectural Languages” on InfoQ (re-published from IEEE Software) recently noted the following reasons for not using an architectural language (emphasis added):

  • formal ALs’ need for specialized competencies with insufficient perceived return on investment,
  • overspecification as well as the inability to model design decisions explicitly in the AL, and
  • lack of integration in the software life cycle, lack of mature tools, and usability issues.

All of the emphasized items above (the first two bullets) represent usability and value issues; a failure to communicate. As Simon Brown observed in “Simple Sketches for Diagramming Your Software Architecture”:

In today’s world of agile delivery and lean startups, some software teams have lost the ability to communicate what it is they are building and it’s no surprise that these teams often seem to lack technical leadership, direction and consistency. If you want to ensure that everybody is contributing to the same end-goal, you need to be able to effectively communicate the vision of what it is you’re building. And if you want agility and the ability to move fast, you need to be able to communicate that vision efficiently too.

Simon is a proponent of a sketching technique that addresses many of these communication failures:

The goal with these sketches is to help teams communicate their software designs in an effective and efficient way rather than creating another comprehensive modelling notation. UML provides both a common set of abstractions and a common notation to describe them, but I rarely find teams that are using either effectively. I’d rather see teams able to discuss their software systems with a common set of abstractions in mind rather than struggling to understand what the various notational elements are trying to show.

Simon’s colleague, Robert Annett, recently posted “Diagrams for System Evolution”, which proposes using the color-coding scheme from diff tools to indicate change: red = removed, blue = changed, green = new. Simon followed this up with two posts of his own, “Diff’ing software architecture diagrams” and “Diff’ing software architecture diagrams again”, which dealt with applying Robert’s ideas to Simon’s structurizr.com tool.
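The underlying diff idea is simple enough to sketch. The Python below is purely illustrative (a toy model of named elements, not the structurizr API): compare two versions of an architecture and tag each element per Robert’s color convention.

    def diff_elements(before, after):
        """Given two sets of element names, assign diff colors:
        green for new elements, red for removed ones."""
        colors = {}
        for name in after - before:
            colors[name] = "green"  # new in the later version
        for name in before - after:
            colors[name] = "red"    # removed in the later version
        for name in before & after:
            # Present in both; a real tool would compare each element's
            # properties to decide whether it should be "blue" (changed).
            colors[name] = "unchanged"
        return colors

    v1 = {"Web App", "Database", "Batch Importer"}
    v2 = {"Web App", "Database", "Message Bus", "Pricing Service"}
    print(diff_elements(v1, v2))

Flagging “blue” (changed) elements requires comparing the properties of each shared element, which is exactly the sort of bookkeeping that motivates doing this in a tool rather than by hand.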

Simon’s work, coupled with Robert’s ideas, addresses many of the highlighted deficiencies listed above (it even touches on the third bullet that I didn’t emphasize). Ruth Malan’s work also contains some ideas that are vital (in my opinion) to being able to visualize and communicate important design considerations – explicit quality of service and rationale elements along with organizational context elements. A further enhancement might be incorporating these into a platform that can tie elements of software architecture together with elements of solution and enterprise architecture, such as the one proposed by Tom Graves.

Given the need for agility, it might seem strange to be talking about modeling, design documentation, and architectural languages. The fact is, however, that many of us deal with inherently complex systems in inherently complex ecosystems. Without the ability to visualize a design in its context, we run the risk of either slowing down or going down. Not everyone can afford to “move fast and break things”.