Many see ambiguity as antithetical to software requirements. However, as Ruth Malan has observed:
Pick your battles with ambiguity carefully. She is a wily foe. Not to be dominated. Rather invited to reveal the clarities we can act on. Make decisions with the understanding that we need to be watchful, for our assumptions will, sooner or later, become again tenuous in that fog of ambiguity and uncertainty that change churns up.
Premature certainty, locking in too soon to a particular quality-of-service metric or to a particular approach to a functional requirement, can cause as many problems as requirements that are too vague. For quality-of-service (also known as "non-functional") requirements, this manifests as metrics that lack a basis in reality, metrics that do not account for differing circumstances, and/or metrics that fail to align with their objective. For functional requirements, premature certainty presents as designs masquerading as requirements (specifying "how" instead of "what") and/or as contradictory requirements. In both cases, it turns what should be an aid to understanding into an impediment, as well as a source of conflict.
Quality of service requirements are particularly susceptible to this problem. In order to manage quality, metrics must be defined to measure against. Unfortunately, validity of those metrics is not always a priority. Numbers may be pulled from thin air, or worse, marketing materials. Without an understanding of the costs, five 9s availability looks like a no-brainer.
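The cost of an availability target becomes concrete once it is translated into permitted downtime. A quick back-of-the-envelope calculation (a generic sketch, not tied to any particular system) shows how steep the curve gets as the 9s pile up:

```python
# Permitted downtime per year implied by an availability target of N nines.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # ignoring leap years for simplicity

def downtime_per_year(nines: int) -> float:
    """Seconds of allowed downtime per year, e.g. nines=5 -> 99.999%."""
    availability = 1 - 10 ** -nines
    return SECONDS_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {downtime_per_year(n):,.0f} seconds/year")
```

Two 9s allows about 3.6 days of downtime a year; five 9s allows barely five minutes. Each added 9 cuts the budget tenfold, and the engineering cost of meeting it rises accordingly, which is exactly why the target is not a no-brainer.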
Committing to a metric without understanding the qualitative value of that measure is risky. A three-second response time sounds quick, but will it feel quick? Is it reasonable given the work to be performed? Is it reasonable across the range of environmental conditions that can be expected? How exactly does one measure maintainability? As Tom Graves noted in "Metrics for qualitative requirements":
To put it at perhaps its simplest, there’s a qualitative difference between quantitative-requirements and qualitative ones: and the latter cannot and must not be reduced solely to some form of quantitative metric, else the quality that makes it ‘qualitative’ will itself be lost.
In another post, “Requisite Fuzziness”, Tom points out that measures are proxies for qualities, not the qualities themselves. A naive approach can fail to meet the intended objective: “The reality is that whilst vehicle-speed can be measured quite easily, often to a high degree of precision, ‘safe speed’ is highly-variable and highly-contextual.” This context-bound nature begets a concept he refers to as “requisite fuzziness”:
It’s sort-of related to probability, or uncertainty, but it’s not quite the same: more an indicator of how much we need to take that uncertainty into account in system-designs and system-governance. If there’s low requisite-fuzziness in the context, we can use simple metrics and true/false rules to guide decision-making for that context; but if there’s high requisite-fuzziness, any metrics must be interpreted solely as guidance, not as mandatory ‘rules’.
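Graves' vehicle-speed example can be sketched in code. The condition scores and the weighting below are purely hypothetical illustrations, not anything from his post; the point is the structural difference between a true/false rule (low requisite fuzziness) and the same number treated as guidance (high requisite fuzziness):

```python
SPEED_LIMIT_KPH = 100  # hypothetical posted limit

def violates_limit(speed_kph: float) -> bool:
    """Low requisite fuzziness: a simple true/false rule suffices."""
    return speed_kph > SPEED_LIMIT_KPH

def safe_speed_guidance(speed_kph: float, visibility: float, road_grip: float) -> str:
    """High requisite fuzziness: the metric is guidance, not a mandate.

    visibility and road_grip are 0.0-1.0 condition scores (hypothetical).
    """
    # Degrade the advisable speed as conditions worsen.
    advisable = SPEED_LIMIT_KPH * min(visibility, road_grip)
    if speed_kph <= advisable:
        return "within guidance"
    return f"review: exceeds advisable {advisable:.0f} kph for conditions"

# 90 kph is legal in every case, but only advisable in good conditions:
print(violates_limit(90))                 # False
print(safe_speed_guidance(90, 1.0, 1.0))  # within guidance
print(safe_speed_guidance(90, 0.4, 0.8))  # review: exceeds advisable 40 kph
```

The hard rule never changes its answer; the guidance function has to be interpreted against context, which is precisely what "requisite fuzziness" asks of a metric.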
The benefits of requisite fuzziness for functional requirements are somewhat counter-intuitive. Most would argue against ambiguity, as indeed Jeff Sutherland did in "Enabling Specifications: The Key to Building Agile Systems":
The lawyers pointed out that a patent application is an “enabling specification.” This is a legal term that describes a document that allows the average person knowledgeable in the domain to create the feature without having any discussion with the originators of the enabling specification.
In general, requirements are NOT enabling specifications. On a recent project at a large global company we discovered that hundreds of pages of requirements were not enabling specifications. On the average 60% of what was in the documents was useless to developers. It caused estimates to double in size. Even worse, 10% of what was needed by developers to implement the software was not in the requirements.
The enabling specifications used at PatientKeeper provided a global description of a feature set framed as lightweight user stories with screen shots, business logic, and data structures. The enabling specification was used to generate user stories which then formed the product backlog. The global feature description was updated regularly by the Product Owner team and was a reference to the state of the system that allowed developers to see where the user stories in the product backlog came from.
A user story needs to be a mini-enabling specification for agile teams to operate at peak performance. If it is not, there will be the need for continued dialogue with the Product Owner during the sprint to figure out what the story means. This will reduce story process efficiency and cripple velocity.
While this level of detail may well enable a team to develop efficiently, it may cripple the team’s ability to develop effectively. Given the interdependent relationship between architecture and requirements, overly prescriptive requirements can introduce risk. When design elements (such as “data structures” above) find their way into requirements, it may be that the requirement cannot be implemented as specified within the constraints of the architecture. Without the dialogue Jeff referred to, either the requirement will be ignored or the integrity of the architecture will be violated. Neither of these options is advisable.
Another danger inherent in this situation is that of solving the wrong problem. This was addressed by Charlie Alfred in “Invisible Requirements”:
Coming to solutions (requirements) too quickly often times overlooks potentially more beneficial solutions. To illustrate this, consider the Jefferson Memorial.
Several years ago, excessive erosion of the Jefferson Memorial was noticed. A brief investigation identified excessive cleaning as the cause. Since the memorial must be kept clean, more investigation was necessary. Bird droppings were identified as the culprit, so actions were taken to have fewer birds around.

Eventually, however, someone asked why the birds were such a problem with the Jefferson Memorial and not the others. Another study determined that the birds frequented the memorial, not for love of Jefferson, but for love of the many tasty spiders that made their home there. Probing further, the spiders were thriving because the insect population was proliferating.

Finally, understanding that the insects were attracted by the memorial lights at dusk and dawn identified the ultimate solution. Turn off the lights. Initial solutions driven by quick decisions by memorial managers (i.e. powerful stakeholders) provided expensive, ill-suited solutions for an ill-understood problem. The root cause and final solution requirement were well hidden, only brought to light by extensive and time-consuming trial-and-error solutions. Each required solution inappropriately framed the problem, missing the associated hidden causes and final necessary requirement.
“Expensive”, coupled with “extensive and time consuming”, should give pause, particularly when used to describe failures. Naively implementing a set of requirements without technical analysis may well harm the customer. In “Changing Requirements: You Have a Role to Play!”, Raja Bavani noted:
You have to understand what business analysts or product owners provide you. You have to ask questions as early as you can. You have to think in terms of test scenarios and test data. You have to validate your thoughts and assumptions whenever you are in doubt. You have to think about related user stories and conflicting requirements. Instead of doing all these, if you are going to remain a passive consumer of the inputs received from business analysts or product owners, I am sure you are seeding bug issues.
While we don’t want requirements that are deliberately undecipherable, neither can we expect requirements that are both fully developed and cohesive with the architecture as a whole. Rather, we should hope for something like what Robert Galen suggested, that “communicates…goals while leaving the flexibility for my architect to do their job”. They should be, according to J. B. Rainsberger, a ticket to a conversation.
Lisa Crispin captured the reason for this conversation in “Helping the Customer Stick to the Purpose of a User Story”:
Make sure you understand the *purpose* of a user story or feature. Start with the “why”. You can worry later about the “how”. The customers get to decide on the business value to be delivered. They generally aren’t qualified to dictate the technical implementation of that functionality. We, the technical team, get to decide the best way to deliver the desired feature through the software. Always ask about the business problem to be solved. Sometimes, it’s possible to implement a “solution” that doesn’t really solve the problem.
Likewise, Roman Pichler observed:
If I say, for instance, that booking a training course on our website should be quick, then that’s a first step towards describing the attribute. But it would be too vague to characterise the desired user experience, to help the development team make the right architecture choices, and to validate the constraint. I will hence have to iterate over it, which is best done together with the development team.
Rather than passively implementing specifications and hoping that a coherent architecture “emerges”, iteratively refining requirements with those responsible for the architecture stacks the deck in favor of success by surfing ambiguity:
“When you ask a question, what two word answer distinguishes the architect?” And they didn’t miss a beat, answering “It depends” in an instant. “That’s my litmus test for architects.” I told them. “So, how do I tell a good architect?”…I said “They tell you what it depends on.” Yes, “it depends” is a hat tip to multiple simultaneous possibilities and even truths, which is the hallmark of ambiguity (of the kind that shuts down those who are uncomfortable with it). The good architect can sense what the dependencies are, and figure what to resolve and what to live with, to make progress.
Ruth Malan, A Trace in the Sand
13 thoughts on “Beware Premature Certainty – Embracing Ambiguous Requirements”
Bravo! Excellent post, Gene
I really like the way you captured the different dimensions of “requirements uncertainty”:
o the deep (not superficial) meaning of a requirement
o what the priority of a requirement is
o how likely a requirement is to evolve (and what might trigger this)
o whether a requirement is achievable
o how well a requirement can be satisfied
o what constraints a requirement imposes on a solution and what trade-offs this causes
I also really like the way you weave in the way that the multiple contexts of a problem magnify these dimensions of uncertainty. I continue to be impressed with the way you always manage to keep this fact at the front of your consciousness, and articulate this reality very clearly.
Over the past 3 years, I’ve worked with several medical device companies. One constant is the high degree of difference in the problem space that results from changes in geography, facility size and ownership, and clinical practice.
One final observation is that ambiguity, like premature clarity, can lead to problems of its own. It seems to me that organizations that try to define requirements before they deeply understand the problem often end up with:
o too much ambiguity
o premature clarity, or
o in the worst case, both.
By contrast, organizations that invest in end user value, challenges, architecture and risk assessment/mitigation are able to reduce premature clarity AND ambiguity at the same time.
Thanks, Charlie…very much appreciated.
Your final observation sums it up perfectly – without understanding the problem space, any solution is at risk for failure from what we don’t know, what we think we know, or both.
A very substantial, well-argued post. I enjoyed how you used relevant views by others as points of reference for your own thoughts.
Design decisions posing as requirements remain a problem in my work environments. In my experience, exploring the problem is often neglected in favour of jumping to conclusions on solution design.
It’s somewhat comforting to read others struggle with similar difficulties.
Indeed, it’s a very common problem. People tend to proffer solutions instead of just defining problems. Another problem is that (at least in my experience) we’ve been conditioned to try to put numbers around things…that’s great when the numbers are meaningful, but a potential problem when they have no basis.
Please note, this isn’t a case of a developer carping about the business unit. I’m not; they’re neither malicious nor dumb. This project is running very differently than they are used to, and I’m quite confident that as the project moves on these problems will smooth out through experience. However, the couple of examples below, I think, highlight some common problems. Hopefully they even show why they’re problems.
“A user story needs to be a mini-enabling specification for agile teams to operate at peak performance. If it is not, there will be the need for continued dialogue with the Product Owner during the sprint to figure out what the story means. This will reduce story process efficiency and cripple velocity.”
This is something I struggle with quite a bit. Sometimes it’s incomplete, or even untestable, acceptance criteria. Sometimes the “As an [actor] I need [need] because [reason]” pattern of the story is so vague that it becomes clear that the business unit isn’t sure what they want. Sometimes the story is full of implementation details and contains very little of the business need itself.
“Committing to a metric without understanding the qualitative value of that measure is risky. A three-second response time sounds quick, but will it feel quick? Is it reasonable given the work to be performed? Is it reasonable across the range of environmental conditions that can be expected?”
Another current struggle (did you poll my life before writing this?). We’re in the middle of a projected 18-month ecom project. With less than two months down, and little to no web work done yet, I find myself in meetings listening to people ask for a 3-second page load time. The page hasn’t been built yet. In fact, we’re relying heavily on a CMS- page design/creation isn’t even in the project scope. And yet.
No worries, you’re definitely not alone in this. People seem to have a real compulsion to present their needs in terms of a solution. I always start by hearing them out, then trying to pull the need out by exploring what they’ve asked for and the motivations behind it. Not to be repetitive, but I really like Rainsberger’s dictum that a user story is a ticket to a conversation. I think that tracks much more closely to reality than Sutherland’s statement.
As to the load time example, there’s another story there (I had a product owner that had read “you should always use images that will load in three seconds or less” and translated that into “all images should load in three seconds or less” – a lesson in physics and logic ensued). I typically point out that after the low hanging fruit of redundant/inefficient code and web page size, the main variables I have control over are the machine specs and the amount of work done. If the time limit is fixed, how much money are they willing to throw at bigger boxes and failing that, what parts of the feature are they willing to cut to stay within it?
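The physics lesson in that anecdote comes down to simple arithmetic: transfer time is bounded below by payload size divided by bandwidth, so a fixed time budget implies a maximum page weight. A rough sketch (the 1.5 Mbps figure is a hypothetical connection speed chosen for illustration, not a measurement from the project):

```python
def max_payload_kb(budget_seconds: float, bandwidth_kbps: float) -> float:
    """Largest payload (KB) that can transfer within the time budget.

    Ignores latency, rendering, and server-side work, so the real
    budget for content is tighter still.
    """
    return budget_seconds * bandwidth_kbps / 8  # kilobits/s -> kilobytes/s

# A 3-second budget on a hypothetical 1.5 Mbps connection:
print(f"{max_payload_kb(3, 1500):.0f} KB")
```

On that connection, everything (markup, images, scripts) has to fit in well under 600 KB before rendering even starts, which is why "all images should load in three seconds" cannot be promised independently of image size and connection speed.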