By Gordon Rugg
Why don’t clients and customers make their minds up about what they want?
There are several reasons, all of which make sense in hindsight, but none of which are immediately obvious.
This article is a short introduction to one of those reasons, one that can be handled swiftly, cheaply and easily. I’ll return to this topic in more depth in later articles.
In the part of the Verifier framework that deals with getting information out of people, there’s a category called “Future knowledge types” at the top of the diagram.
It’s subdivided into “Discovered” and “Negotiated”.
The “Negotiated” sub-type is for cases where the client isn’t a single individual, but a group of people who have to sort out the requirements among themselves. This can lead to all sorts of complications during development of a product, whether it’s a new type of widget or a skyscraper. This is a well-recognised problem, and there’s a lot of literature on topics such as stakeholder analysis, conflict resolution, negotiation, etc.
The other sub-type is the one I’ll be focusing on in this article, namely “Discovered” requirements. Here’s the background story about how requirements specialists came to view requirements in a new way.
Discovery and waterfalls
Back in the old days of software development, the usual approach was to ask the clients for a complete set of specifications and requirements. The developer then went away, built what was described in the specs and reqs, and then took the resulting product to the client for acceptance testing and, in theory, sign-off.
This was known as the waterfall method. Once the client had signed off the specifications and requirements, the client was committed to them, in the same way that a log going over a waterfall was committed; there was no going back. Another similarity with going over a waterfall was that the result was often unpleasant.
Why was it often unpleasant? Because the client often discovered that the software had huge, glaring problems in it. Typically, these problems involved features so fundamental that the client had not bothered to spell them out, and had assumed that the developer would not need to be told about them. It’s the equivalent of discovering that the architect hadn’t made the doors in your new house big enough for you to get through, on the grounds that you hadn’t actually specified that requirement explicitly, and that the miniature doors were much neater and much easier to install.
The next stage typically involved bad tempers, bad language, and quite often letters from lawyers. The clients generally took the view that it was the developers’ job to establish the requirements; the developers usually took the view that it was the clients’ responsibility to make their minds up about what they wanted.
Discovery and spirals
This wasn’t a very satisfactory situation, and one response was the development of Boehm’s spiral model of software development. This was based around a cycle of designing and building interspersed with testing and evaluation, which were used to adjust the design at the start of the next cycle, and so on. This meant that there were repeated chances to check that the design was going in roughly the direction that the client wanted, and it also meant that the project still had a clear structure, rather than being an endless unstructured mess.
This approach improved the situation, but it didn’t explain why major requirements so often slipped through the net, or why clients so often wanted to make changes. The answer to those questions is in the middle part of the diagram of knowledge types.
The people using the waterfall method had implicitly assumed that there was just one type of memory, and that when people failed to mention a particular requirement, this was either deliberate or the result of a glitch in human memory.
The reality is more complex. There are numerous types and sub-types of human memory, and of communication. Some disciplines at least made the distinction between explicit knowledge and tacit, unspoken knowledge, but that was still too broad-brush. When you dig into the psychology literature, you find that there are a lot of types of memory and a lot of factors that affect memory. It’s not an impossibly huge number, but as the diagram above shows, you really need at least a dozen categories to start making proper sense of it all.
To understand the problem of clients not knowing what they really want, as opposed to not saying what they really want, we need to use two of the categories from the table.
One is the “Discovered” category that we saw earlier. Clients can’t know what’s possible if they haven’t encountered it before. That realisation isn’t exactly rocket science, but it’s a major constraint.
That leads on to the second particularly relevant category, which is “Recognition versus recall”. Recognition is passive memory; you see something, and you recognise it. Recall is active memory, where you’re actively trying to retrieve a memory. Recognition is usually much more powerful than recall. For instance, if you were asked to name all the countries in Europe, you probably wouldn’t recall Andorra, though you’d probably recognise it as a European country when you saw its name.
People are pretty good at recognising whether or not they’d want a particular feature in a product when they see it. They’re not so good at recalling all the features that might be relevant, or at knowing what features are possible.
There’s another common pair of categories that often leads to clients not saying what they want, even when the client is trying hard to be helpful. This is particularly likely to affect important features. These categories are “Taken for Granted Knowledge” and “Not Worth Mentioning Knowledge”. They’re pretty much what they sound like. People don’t mention things that they assume you know already, or that they think are too trivial to mention. People often get those decisions wrong. Often, something doesn’t get mentioned precisely because it’s so familiar a part of the client’s life that they assume it must be equally familiar to everyone else.
So, what can you do about this situation? Fortunately, there’s a solution that’s usually cheap, swift and simple (or at least, a lot cheaper, swifter and simpler than making a major mistake and having to clear up the mess). It involves combining three complementary approaches.
Non-functional prototyping combined with scenarios and think-aloud technique
That phrase may look complex, but the core concepts are actually pretty straightforward.
First, non-functional prototyping.
If people are good at recognising their desired features when they see them, an obvious and simple solution is to show the client what options are possible.
As is often the case, something similar to this approach is already widely used in different fields. Architects, for instance, routinely produce drawings, and often produce models, that can be shown to clients for feedback early in the design process.
Unfortunately, as is also often the case, the current approaches often get it nearly right, but still have some issues to resolve. Architects, for instance, are very good at interpreting architectural drawings; clients are often very bad at interpreting architectural drawings. Artists’ impressions of what the building will look like are usually good for giving clients an idea of the overall look and feel of the building, but they’re not usually so good for systematically identifying points where the client has specific requirements that haven’t yet been picked up.
There are some useful methods and concepts from user-centred software design that can be applied in other fields, and that are easy to use, efficient, and user-friendly.
One is non-functional prototyping. This involves producing a mockup that doesn’t do anything (hence, non-functional) but that is a realistic simulation of the product. In the case of software, one common approach is to mock up a series of screens using PowerPoint, with each screen on a separate slide.
How do you know what series of screens to link together for a particular simulation? One useful approach is scenarios. The word has been over-used in recent years, but the concept is a powerful one, if you apply it appropriately. In this context, “appropriately” involves asking the client to give you a range of common scenarios, dangerous scenarios, desirable scenarios, etc. By choosing “boundary” scenarios in this way, you can get at the rare but important cases that can make the difference between a brilliantly successful design and a disaster. Once you have your list of scenarios, you can work through them step by step with the client, checking how well each step maps on to your mockup.
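To make this concrete, a non-functional prototype can be modelled as nothing more than a set of named screens and the links between them, and a scenario as a sequence of steps, each of which should map onto some screen. The sketch below (the screen names and scenario steps are invented for illustration) checks each scenario step against the mockup and flags any step with no matching screen – exactly the kind of gap that a walkthrough with the client is meant to surface.

```python
# Minimal sketch: a non-functional prototype as a set of named screens,
# checked against a scenario. All screen and step names are invented.

# The mockup: each screen lists the screens it links to.
mockup = {
    "login": ["search"],
    "search": ["results"],
    "results": ["item_detail", "search"],
    "item_detail": ["results"],
}

# A "boundary" scenario from the client, as the sequence of screens they
# expect to pass through. "checkout" is a step the mockup doesn't cover.
scenario = ["login", "search", "results", "item_detail", "checkout"]

def walk_through(mockup, scenario):
    """Return (covered, gaps): steps the mockup supports, and steps it misses."""
    covered, gaps = [], []
    for step in scenario:
        (covered if step in mockup else gaps).append(step)
    return covered, gaps

covered, gaps = walk_through(mockup, scenario)
print("Covered steps:", covered)
print("Missing from mockup:", gaps)  # candidate discovered requirements
```

The point of the sketch is not the code itself, but the discipline: every scenario step either maps onto a mockup screen or exposes a requirement that nobody has mentioned yet.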
You can complement the mockups and scenarios with laddering, to investigate the respondents’ higher-level goals, and to unpack any technical or subjective terms that they use.
You can also complement the mockups and the scenarios with think-aloud technique. It’s what it sounds like. You ask the client to think aloud while they’re looking at the mockup and working through the scenario; you record what they say, you listen very carefully to it afterwards, and then you set your designer’s ego aside and work out how to change the design so that the client loves you.
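To show how the recorded comments might be organised after a session, here is a small sketch (the comments, screen names and marker phrases are all invented for illustration) that groups the client’s remarks by the screen they were looking at and flags remarks containing common markers of confusion, so the problem spots stand out when you review the recording.

```python
# Minimal sketch: sorting think-aloud comments after a session.
# The comments, screen names and confusion markers are all invented.

CONFUSION_MARKERS = ("don't understand", "where is", "why does", "confusing")

# Each comment is (screen being viewed, what the client said).
session = [
    ("search", "OK, I'd type my query here."),
    ("results", "Why does it show these coloured squares?"),
    ("results", "Oh, I see - each square is a page of results."),
    ("item_detail", "Where is the button to save this?"),
]

def flag_problems(comments):
    """Group comments by screen, marking those with confusion markers."""
    by_screen = {}
    for screen, remark in comments:
        flagged = any(m in remark.lower() for m in CONFUSION_MARKERS)
        by_screen.setdefault(screen, []).append((remark, flagged))
    return by_screen

report = flag_problems(session)
for screen, remarks in report.items():
    problems = [r for r, flagged in remarks if flagged]
    print(screen, "-", len(problems), "possible problem(s)")
```

A real analysis is far richer than keyword matching, of course; the sketch only illustrates the habit of tying each remark back to the exact point in the mockup that provoked it.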
That last part is hard for most designers, and for most human beings, come to that. Learning how to do ego-free design is difficult. However, it’s an invaluable skill once you master it.
In practice, the first feedback loop is usually the one where the most dramatic changes occur. The second loop is usually about customisation of features, and the third is about fine-tuning.
In our experience with developing the Search Visualizer software, we saw this pattern. The users weren’t initially expecting to see something so different from other search engine interfaces, but rapidly saw ways to use its features. They then typically asked for customised features to be added, within the same overall design, and then in the next feedback loop they were usually focused on minor tweaks, such as phrasing on the search bar.
We’ll be returning to all these issues in more depth in later articles. We’ll also be writing about related topics, including schema theory, script theory and design rationale.
As usual, I’ve indicated useful concepts in bold italic, and haven’t given specific references when it’s easy to find material about the concepts online.
There are links below to some of our tutorial articles that you might find useful, since tutorial articles can be hard to locate online.
There’s an article about think-aloud technique here on our blog site:
Our article about reports includes a section on scenarios:
There’s an overview of types of memory and knowledge, including Taken For Granted and Not Worth Mentioning Knowledge within our Verifier approach here:
There’s a tutorial article on laddering on our web site:
There’s more about reports and laddering, and about elicitation methods in general, in my book with Marian Petre, A Gentle Guide to Research Methods. It contains worked examples of using each method, as well as guidance on which methods to use for which purposes.
It’s available on Amazon here:
There’s much more about all of this in my book Blind Spot:
The title of this post comes from an article by Susie Hooper and myself about why people can’t know what they need:
Rugg, G. & Hooper, S. (1999). Knowing the unknowable: the causes and nature of changing requirements. Proceedings of the EMRPS’99 workshop, Venice, 25-26 November, 1999.
A closing thought: If you’ve never been to Venice, you might like to know that it really is as beautiful as it looks in the movies and snapshots; if you go out of season, it’s wonderful.