By Gordon Rugg
Sometimes, product development is straightforward. The client tells you what they want; you produce it; they’re happy with it, they pay you, and everything is fine. This is known in the field as the waterfall model of development; once the client has signed off on the requirements, the process then moves irrevocably onwards, like a river going over a cliff.
When you and the client are dealing with familiar territory, this approach usually works reasonably well. Sometimes, though, things don’t work that way. You’re particularly likely to hit problems when you’re developing something that’s new territory for you and/or the client.
One common problem involves the client changing their mind part-way through development.
Another involves the client being unhappy with what you produced.
Communication problems are another frequent source of trouble, with you trying to make sense of just what the client wants, and getting more and more frustrated.
If you’re in that situation, or you think there’s a risk of getting into it, you might want to try iterative non-functional prototyping. It’s a lot simpler than it sounds, and it’s a fast, cheap, efficient way of getting to the heart of what the client wants, particularly when clients don’t actually know just what they want at the start. It involves looping through mockups systematically until the requirements are clear.
This article gives a short introduction to the core concepts and the process. It should be enough to let you get started; there’s supporting material elsewhere on this blog which goes into more detail about the underpinnings, which I’ve linked to within the article.
The core concept
The core concept is simple. You build a mockup, show it to the client, get feedback, and then revise the mockup in line with the client’s feedback. You show the client the revised version, get feedback, and revise again in line with the client’s feedback. You repeat this process until the client is happy with the mockup, and then you build the actual product, based on the final mockup. Because the mockup is non-functional (i.e. it’s just a mockup, without any working parts or working software) it’s cheap and simple to produce, and easy to modify. You will usually catch the key requirements, including the major unexpected ones, within a couple of iterations (i.e. repetitions of the design/feedback loop).
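As a sketch, the design/feedback loop can be written out in Python. The function and variable names here (`iterate_mockups`, `revise`, `rate`, a 0–100 rating with a target of 80) are illustrative placeholders for the human activities described above, not part of any real tool:

```python
def iterate_mockups(revise, rate, first_mockup, target=80, max_rounds=10):
    """Generic design/feedback loop: revise the mockup until the
    client's rating reaches `target`, or we give up after `max_rounds`.

    `revise` and `rate` stand in for the human activities in the text:
    showing the mockup, gathering feedback, and reworking the design.
    """
    mockup = first_mockup
    for round_no in range(1, max_rounds + 1):
        score = rate(mockup)          # show it to the client, get feedback
        if score >= target:
            return mockup, round_no   # client is happy; build the real thing
        mockup = revise(mockup, score)  # cheap, since nothing works yet
    return mockup, max_rounds

# Toy simulation: each revision closes half the remaining gap.
rate = lambda m: m["fit"]
revise = lambda m, s: {"fit": s + (100 - s) // 2}
final, rounds = iterate_mockups(revise, rate, {"fit": 20})
print(rounds, final)  # converges in three rounds in this toy run
```

In this toy simulation the loop settles within three rounds, which matches the pattern described later in the article: big changes early, fine-tuning after.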
A key point to note is that this version of prototyping differs significantly from the way prototyping is used in other fields such as engineering, architecture, and film (where it takes forms such as storyboards and rushes). This version has a much deeper infrastructure of methods and processes, which gives it a broader and more powerful scope than other forms of prototyping. The next sections work through these issues.
When and why do you use this approach?
In brief: You use this approach with clients who are unlikely to know what they want, but who will recognise it when they see it.
When a client doesn’t have any previous experience of a product, then they can’t know what they don’t know.
This means that finding out their requirements isn’t a straightforward case of asking the client to tell you what they want. Instead, you have to help the client to discover what they want. This is a very different issue from asking them what they want, and requires very different methods.
A key concept for this process is recognition. People are better at passive memory (recognition) than at active memory (recall). In the context of product requirements, a particularly important form of recognition is recognition of affordances. This involves someone realising that a particular product allows them to do useful things, which are often completely unexpected and which open up very attractive possibility spaces.
Your prototype mockups show the client what is possible, and allow the client to spot key affordances before any actual product build has begun.
This reduces the risk of the client changing their mind part-way through the build, and improves the likelihood of producing something significantly better than the client had anticipated.
There’s also a good chance that the client or other stakeholders will spot key things that you’ve missed. Often, these key things are so familiar that the client or stakeholders have taken them for granted, and not thought to mention them.
Discovering the requirements in this context often needs to proceed from two directions at the same time.
You can find out some requirements by laddering upwards on the client’s goals and values, and then laddering downwards on ways of including them in the product design. This approach tends to feel more reassuring to system and product developers who are used to traditional “interview the client first” approaches, and it’s useful for understanding the client’s context.
Other requirements, though, are better handled through showing the client what’s possible. This can give the impression of being less structured, but with experience, it can be integrated neatly with other approaches such as laddering and card sorts to provide pretty systematic coverage.
What do you measure?
The key issue is that you need to find out what matters to the client and to other stakeholders, in particular the end users. The client and other stakeholders will probably be able to tell you some of those key features in a traditional interview, but they will probably not mention a lot of other key features, for various reasons which are covered in detail in our articles on requirements.
The short version is that you show the mockup to the client and relevant stakeholders, and get them to think aloud while they are trying it out. This will tell you which features they are noticing. There will be other important features that they don’t mention, for various reasons, which is why you’ll need to observe what they’re doing (for instance, when they look puzzled, or when they swear). To make sense of why they’re mentioning those features, you’ll need upward laddering. To make sense of subjective or technical terms that they mention, you’ll need downward laddering. To cross-check your mockup against market competitors, or against other mockups that you’ve created, you’ll probably want to use card sorts.
All these methods are quick, cheap and easy to use, once you know how to use them.
At the end of this process, you’ll know which are the key aspects of the product that you’ll focus on improving through the subsequent versions of the mockup.
How do you measure it?
In brief: The most efficient method is usually visual analogue Likert-style scales.
By the end of the previous stage, you’ll have a list of key aspects of the product. You now need to translate these into a form that you can measure easily.
For instance, suppose that one key aspect is that the product must be easy to use. You can translate this into a questionnaire question, such as: Please indicate on the scale below how easy you would find it to use this product. This would be accompanied by a scale like the one below, from our article on questionnaires.
I prefer to use visual analogue Likert-style scales for this purpose, for various reasons. One very practical reason is that these scales are much finer-grained than the usual Likert scales, which typically have at most about ten points. This greater sensitivity lets you use more powerful statistics, if you’re doing statistical analysis, and also lets you use smaller sample sizes.
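A visual analogue response is simply the position of the respondent’s mark along the line, so scoring it is a matter of measuring that position and expressing it as a fraction of the line’s length. A minimal sketch follows; the 100 mm line length and the 0–100 output range are illustrative assumptions, not specifics from this article:

```python
def score_visual_analogue(mark_mm, line_mm=100.0):
    """Convert a mark on a visual analogue scale to a 0-100 score.

    mark_mm: distance of the respondent's mark from the left anchor, in mm.
    line_mm: total length of the printed line (100 mm is a common choice,
             and an assumption here, not a figure from the article).
    """
    if not 0 <= mark_mm <= line_mm:
        raise ValueError("mark lies outside the scale")
    return 100.0 * mark_mm / line_mm

# A mark 73 mm along a 100 mm line scores 73.0; on a 200 mm line, 36.5.
print(score_visual_analogue(73))
print(score_visual_analogue(73, line_mm=200))
```

The practical upside is that scores are effectively continuous, which is what makes the finer-grained statistics mentioned above possible.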
So, you produce a new version of your mockup, and you show it to the client and/or stakeholders, and you do observation and think-aloud as before, and then you ask them to rate that version using your visual analogue Likert-style questionnaire. The ratings tell you which features are getting high scores, and which aren’t. You now change the features of the mockup which got low scores, and repeat the process. You should know what type of changes to make as a result of laddering from the think-aloud and observation.
This sounds complicated, but it’s pretty self-evident once you start doing it, and is a lot easier to work with than the traditional feedback to the effect that the client wants the product changed, but can’t explain how.
How do you know when you’ve got there?
In brief: You’ve got there when the results from your measurements show diminishing returns.
If you’re working in an environment which is heavily into systematic processes, then you can use statistical analysis to identify when you have reached diminishing returns with your mockups, and can proceed to the actual build.
If you’re not working in a statistics-rich environment, then you can just eyeball the results from the questionnaires. You’ll be able to see when the feedback is good across the board, and when it’s stopped getting much better.
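If you want something slightly more systematic than eyeballing, but short of full statistical analysis, one simple rule of thumb (an illustrative sketch, not a rule from this article) is to stop when the mean rating improves by less than some small threshold between successive mockups:

```python
def diminishing_returns(round_means, threshold=5.0):
    """Return the 1-based mockup round at which mean ratings stopped
    improving by more than `threshold` points, or None if still improving.

    round_means: mean questionnaire score (0-100) for each mockup round.
    The 5-point threshold is an illustrative assumption, not a standard;
    pick something that reflects the noise in your own ratings.
    """
    for i in range(1, len(round_means)):
        if round_means[i] - round_means[i - 1] < threshold:
            return i + 1  # this round added little; time to build
    return None

# e.g. mean scores of 42, 71, 74 across three mockups: the big jump
# happens between rounds one and two, and round three adds little.
print(diminishing_returns([42, 71, 74]))
```

This matches the usual pattern described below: drastic changes between the first and second mockup, then fine-tuning.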
The usual pattern is for any drastic changes to occur between the first and the second mockup. This is where the client is most likely to spot unexpected affordances, and where you’re most likely to catch key requirements that the client forgot or didn’t bother to mention.
After the second mockup, it’s usually a case of fine-tuning the design until everyone’s happy with it. This might well happen by the third mockup.
At this point, you can proceed to do a more detailed project plan, with a clearer idea of what the final product will be, and with reasonable confidence that there won’t be any major changes in requirements from now on.
This method is fast and cheap. If you’re designing software, you can produce surprisingly good mockups using PowerPoint slides with action buttons.
It’s also a method that’s easy for clients and stakeholders to use, since they’re dealing with realistic mockups, rather than sketches or wireframe drawings. Often, the devil is in the detail, and an apparently trivial detail can turn out to have far-reaching implications.
Within this approach, the methodological details are also important, and are what makes this approach different from how prototypes and mockups are used in other fields. It’s based on deliberate, systematic use of multiple elicitation and measurement techniques, designed to complement each other, and to form a coherent, evidence-grounded process that fits neatly into classic project management methods.
This means that there’s an initial learning curve as you learn how to use those techniques. Fortunately, the curve is pretty manageable; most people are able to learn the techniques quickly and easily, particularly when learning via hands-on instruction.
For the broader context, any good text on user-centred design is a good place to start. Don Norman’s work on affordances is particularly important in this context.
If you’re wondering why this approach isn’t more widely used, you’re not alone. This approach has been standard best practice in Human-Computer Interaction (HCI) design for about thirty years, and is a routine part of university computing courses, but it’s still not widely used in industry.
My guess is that it’s in an uncanny valley between two cultures. I suspect that people in the programming/IT culture view it as being about design and human factors, which are someone else’s problem, while managers and designers view it as being about software, which is the problem of programming and IT people. That’s just a guess, though. I’m planning to do some research on this issue when the legendary spare moment comes…
Notes and links
Niagara Falls: By Saffron Blaze – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15045971
Archimedes’ Screw: By: The original uploader was Ianmacm at English Wikipedia – Transferred from en.wikipedia to Commons., Public Domain, https://commons.wikimedia.org/w/index.php?curid=2285256
There’s more about the theory behind this article in my latest book:
Blind Spot, by Gordon Rugg with Joseph D’Agnese