By Gordon Rugg
This article is one in a series about the problem of identifying and clarifying client requirements, using the ongoing semi-humorous example of a client’s requirement for an image of an elephant. This episode looks at ways of establishing the key issues in the client’s requirements when you’re trying to trade off costs against risks.
Finding and clarifying the client’s requirements takes time and costs money, which could be spent instead on development. However, if you make a mistake in the requirements, then correcting that mistake will cost you time and money – often more time and money than it would have cost to get the requirements right in the first place. Often, but not always. So you end up having to balance the certainty of requirements costs against the risk of greater costs further on if you get it wrong. How can you improve the odds in your favour?
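To make that trade-off concrete, it can be sketched as a simple expected-value comparison. All the figures below are invented for illustration; real probabilities and rework costs vary hugely between projects.

```python
# Hedged sketch: comparing the certain cost of extra requirements work
# against the expected cost of a requirements mistake caught late.
# All figures are invented for illustration.

def expected_rework_cost(p_error: float, rework_cost: float) -> float:
    """Expected cost of fixing a requirements mistake later on."""
    return p_error * rework_cost

requirements_work = 5_000  # certain, up-front cost of extra elicitation
risk_without_it = expected_rework_cost(p_error=0.3, rework_cost=40_000)
risk_with_it = expected_rework_cost(p_error=0.05, rework_cost=40_000)

# The extra requirements work pays off if the reduction in expected
# rework cost exceeds what the work itself costs.
saving = risk_without_it - risk_with_it
print(f"saving {saving:.0f} vs cost {requirements_work}")  # saving 10000 vs cost 5000
```

The catch, as the article notes, is the "often, but not always": the probabilities in a real project are themselves uncertain, so this arithmetic gives you a way of framing the bet, not a guarantee.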
One important question involves whether you’re developing a completely novel product, or just a version of an already-established product.
With completely novel products, the client by definition won’t have seen anything similar before, so the requirements process is very much about discovery and exploration and negotiation, with a lot of uncertainty, and often with rapid extreme changes in requirements, particularly in the early stages. One key challenge is identifying the boundaries for the product, particularly as regards requirements involving legal issues or health and safety issues.
With versions of already-established products, on the other hand, the client and you will both have a clear idea of the issues and options and boundaries, and the requirements process is more about choosing what the client wants from within a well-understood set of possibilities.
However, even when you’re developing what looks like a new version of a well-established product, you often encounter unwelcome surprises. A clear idea isn’t always the same as a correct idea. I’ll illustrate this using a scenario where the client tells you that they want a picture of a side view of an elephant. That sounds comfortingly familiar, so you ask how big the elephant should look in the picture, and the client sketches in the air and says “About this high”.
What could possibly go wrong?
The thirteenth month of the year
So, you produce a picture like the one below, and show it to the client. I’ve left in the “about this high” bar to show that it fits that requirement.
http://commons.wikimedia.org/wiki/File:Elephant_side-view_Kruger.jpg
And the client tells you that this is completely wrong, because the elephant needs to be facing the other way, because of a key requirement that they completely forgot to mention.
If you’re just producing a picture for the client, this news is irritating, but not a huge problem. If, on the other hand, you’re an architect designing a building for the client, this news could be a major problem.
Clients often forget to mention key requirements because those requirements are so familiar to them that they assume the requirements are equally familiar to everyone else. There are plenty of examples of requirements and constraints that make sense once you know the full story, but that you’d have little chance of predicting in advance. One example is organisations that run on a 13 month year, rather than a 12 month year. That’s surprisingly common; I once had two students in a group of thirty who had each worked in different companies that used a 13 month year.
If you’re producing novel software, then you’re on the lookout for these unexpected requirements from the outset. If you’re producing a new version of a familiar type of software, you’re more likely to be caught off guard.
One quick way of getting an overview of the requirements is to do some quick mockups and show them to the client. The mockups don’t always need to be sophisticated. I use whiteboards a lot for quick mockups. My office at Keele has a whiteboard with a basket of whiteboard pens beneath it, so that everyone in a meeting can sketch what they mean on the board. It’s also possible to do quick and dirty mockups on PowerPoint, especially if you’re designing software. You can mock up a series of screens as separate slides, and then show them as a slideshow to simulate moving through the software. A great advantage of visual mockups is that they involve much less risk of misunderstandings than verbal descriptions. With verbal descriptions, you often end up with each party having a clear, sensible and completely incorrect belief about what the other party has just said.
Rapid mockups help, but they won’t catch everything. They probably won’t catch rare events, or legal constraints, for example. For those, you need to find the boundary requirements – the edges of the elephant.
Here’s an illustration of what that might look like with the elephant example. You’ve flipped the image round, and done some in-depth probing about the initial requirement for height, and you’ve found where the client wants the back of the elephant to be, and that’s all fairly close to your expectations. I’ve used a light green overlay to show the requirements that you’ve confirmed, superimposed over what you’re expecting the rest of the image to look like, give or take a bit. It’s a fairly close fit so far.
Pushing the elephant analogy a little, you can establish one set of boundaries for the requirements by looking backwards, at cases in the past, which have become precedents for what’s required today. In many fields, there were landmark legal cases in the past which established legal requirements that have been in force ever since.
You can get at these backward-looking requirements using various methods, such as critical incident technique. This technique was developed by Flanagan in the middle of the last century, and is a well-established way of picking out the lessons to learn from key events in the past. Often, these events are accidents or near-accidents, and often there are official inquiries into them, and often those inquiries produce official reports that then become part of best practice and/or legal requirements. A classic example is the Therac-25 disaster, which was dissected in detail by Nancy Leveson. If the requirements involve sensitive topics, such as bad practice within the client’s organisation, then you might use some of the methods described in the previous article in this series, such as projective techniques.
Looking backward helps with one set of requirements. That’s not the whole story, though; you also need to get at forward-looking requirements, about where the client wants to be, and about possible future problems that the client wants to avoid. Continuing with the elephant analogy, you might interview the client about this, and discover that your image of the requirements now looks like this.
This doesn’t look right. The latest set of requirements looks like a head, but it’s nothing like a fit.
With both forward-looking and backward-looking requirements, there’s a high chance of the client mentally overestimating or underestimating the size of an issue. It’s fairly easy to catch this with backward-looking requirements, if you can get access to data about previous cases. It’s more difficult with forward-looking requirements.
This is a major issue if you’re developing a safety-critical system, where system failure can cause injuries or death. People aren’t very good at predicting what might go wrong, or how likely it is to go wrong. There’s been a fair amount of work in this general area. Here’s an example. If you produce a fault tree for the ways that a system or product can go wrong, and then add a wrong branch to that tree, then people are pretty good at spotting the error (for instance, if you add an error about propeller failure to a fault tree for jet engines). However, if you remove correct branches, instead of adding incorrect ones, then it’s surprising how many branches you can remove before people spot that there are branches missing. People are usually better at spotting active errors of commission than errors of omission. Human memory is also liable to numerous types of distortion and error; the work of Elizabeth Loftus has shed considerable light on this topic.
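As a rough sketch of what a fault tree looks like in practice, here is a minimal representation as nested dictionaries, with invented failure modes, together with a helper that lists the leaf-level causes. The point about omission errors is exactly that a wrong leaf in such a list (say, “propeller failure” in a jet-engine tree) jumps out at a reviewer, while a missing leaf usually doesn’t.

```python
# Minimal sketch of a fault tree as nested dicts: each key is a failure
# mode, and its value maps out the contributing causes beneath it.
# Failure-mode names are invented for illustration.

fault_tree = {
    "engine failure": {
        "fuel starvation": {
            "blocked fuel line": {},
            "empty tank": {},
        },
        "compressor stall": {},
        "turbine blade fracture": {},
    },
}

def leaf_causes(tree: dict) -> list[str]:
    """Collect the leaf-level causes (nodes with no children)."""
    leaves = []
    for name, children in tree.items():
        if children:
            leaves.extend(leaf_causes(children))
        else:
            leaves.append(name)
    return leaves

print(leaf_causes(fault_tree))
# ['blocked fuel line', 'empty tank', 'compressor stall', 'turbine blade fracture']
```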
There are ways of tackling these problems. You can start by using some cradle to grave scenarios, where you begin at the start of whatever the system does, and work through to the end of whatever the system does, noting any points along the way where the scenario could go off in more than one direction. A classic everyday example is an online shopping system, where the scenario would probably start at the point where the shopper logs onto the site, and end at the point where the shopper logs off.
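The online shopping example can be sketched as a list of steps on the happy path, each paired with the alternative directions the scenario could branch into at that point. The step and branch names here are invented for illustration.

```python
# Sketch of a cradle-to-grave scenario for an online shop: each step on
# the happy path is paired with the points where the scenario could go
# off in another direction. Names are invented for illustration.

scenario = [
    ("log on", ["forgotten password", "account locked"]),
    ("browse catalogue", ["search returns nothing"]),
    ("add item to basket", ["item out of stock"]),
    ("check out", ["payment declined", "delivery address invalid"]),
    ("log off", []),
]

# Walking the happy path and noting every branch point yields a list of
# requirements questions to put to the client.
questions = [
    f"At '{step}': what should happen if '{branch}'?"
    for step, branches in scenario
    for branch in branches
]

for question in questions:
    print(question)
```

Even a crude list like this tends to surface requirements that a client wouldn’t think to mention unprompted, because the branch points force the “what if?” questions into the open.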
If you’re building a safety-critical system, then you’ll probably be considering methods such as THERP, which is a technique for human error rate prediction (hence the acronym). There are also numerous methods for predicting and assessing risks; it’s a topic that we’ll return to in later articles.
Humans tend to think about three separate but overlapping concepts when they’re dealing with risk. One is probability – how likely is it that the event will happen? Another is severity – how bad will the outcome be if it happens? The third is dread – how scary is the outcome? All of these make sense, but they have very different implications for estimation and management.
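One way to keep the three concepts from being conflated is simply to record them as separate fields, rather than collapsing them into a single risk score. Here’s a minimal sketch, with invented hazards and figures.

```python
# Sketch keeping probability, severity and dread as separate fields
# instead of collapsing them into one number. Hazards and figures are
# invented for illustration.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # how likely is it to happen? (0..1)
    severity: float     # how bad is the outcome, in cost terms?
    dread: float        # how scary is the outcome? (0..1, subjective)

    def expected_loss(self) -> float:
        """Probability times severity: the standard expected-loss figure."""
        return self.probability * self.severity

risks = [
    Risk("data-entry slip", probability=0.4, severity=500, dread=0.1),
    Risk("patient overdose", probability=0.001, severity=1_000_000, dread=0.95),
]

# Expected loss says nothing about dread: a rare, high-dread hazard can
# dominate the client's concerns even when the arithmetic looks tame.
for r in risks:
    print(f"{r.name}: expected loss {r.expected_loss():.0f}, dread {r.dread}")
```

Keeping the dimensions separate makes it easier to spot when a client’s stated priorities are driven by dread rather than by expected loss, which matters for both estimation and management.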
When you’re dealing with requirements, you often have to tease apart issues such as these, where the client might be conflating issues that are logically separate, or, conversely, might be making a specialist distinction between issues that are clearly separate in the client’s field, but that are usually conflated elsewhere. For instance, some fields make a very clear distinction between elapsed time and time on task; others don’t.
Pushing the elephant analogy one last time, this issue of requirements disambiguation can help make sense of situations like the one in this image.
Here, one image chunk is pretty similar to what you were expecting, apart from the trunk pointing in a different direction, but there’s another image chunk that’s unexpected.
It’s now becoming clear that the image the client has in mind is more complex than the original description. When you dig further into the requirements, you discover that the client was actually thinking of something like this.
http://commons.wikimedia.org/wiki/File:Elephants_at_Hagenbeck.JPG
It’s mainly a picture of an elephant in side view, as originally stated, but there’s a lot more going on in addition. Requirements have a habit of being like that…
The bigger picture
We’ve seen enough elephant images and laboured analogies for one article. What other issues still need to be considered when you’re doing requirements?
One issue is “upwards” requirements: Why does the client want this particular set of requirements, and what are the implications for how changeable those requirements might be, and for the choice of design solution?
Another issue is “downwards” requirements: How can you unpack the client’s requirements and turn them into specific, objective statements that let you properly assess the implications for design and production?
A third issue is measurement: What do you and the client need to measure to make complete sense of the requirements, and how do you set about measuring those things?
These will be the topics of the next three posts in this series. That should take us to the end of the elephant theme. In case you’re wondering, there’s a reason that I’ve used the theme of elephants, rather than examples from, say, software development. I’m trying to show the similarities in underlying issues across a wide range of design and development fields, and I’ve found in the past that if you use worked examples from one field, then people from the other fields tend to switch off mentally because they think those examples are unrelated to their field. Using completely different examples such as the elephant theme reduces this problem, but it can also lead to readers wondering what elephants have to do with anything. There’s probably a better way of handling this issue, but if so, I haven’t found it yet; constructive suggestions would be welcome.
I’ll be posting again about design and requirements in later articles; however, I’ll probably feel less tempted to try using an extended analogy in them…
Links and notes
This series consists of Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, and Part 8.
As usual, I’ve used bold italic for technical terms where it’s easy to find further reading. I’ve listed some specialist links below.
The Therac-25 case is described on Wikipedia:
http://en.wikipedia.org/wiki/Therac-25
There’s a copy of the Leveson paper here:
http://sunnyday.mit.edu/papers/therac.pdf
THERP is described on Wikipedia:
http://en.wikipedia.org/wiki/Technique_for_Human_Error_Rate_Prediction