By Gordon Rugg
The central theme in Blind Spot, the new book by Joe D’Agnese and myself, is the Verifier approach. The Verifier approach is a way of spotting errors in human reasoning; in particular, errors made by experts working on long-standing problems, where it looks as if the experts have ground to a halt and can’t see where to go next.
Its starting point is simple. Human beings make pretty much the same types of mistakes, regardless of their level of expertise, or of the field in which they’re working. If you know what those types of mistakes are, then you can go hunting for them.
That idea has been around for a while. What’s new about Verifier is that it tackles human error using a bigger, broader set of methods than previous approaches. For example, traditional formal logic can tell you whether someone’s chain of reasoning is logically sound, but it doesn’t give you any guidance about how to check whether a particular assumption in that chain of reasoning is actually true or not.
So, what we’ve done with Verifier is to assemble virtual toolboxes for each of the key stages in looking for errors. They’re not physical toolboxes or software toolboxes; they’re collections of methods.
There are three toolboxes for spotting errors, plus a fourth toolbox which is about preventing or reducing errors. The key concept they have in common is that each toolbox has a set of evidence-based guidelines for choosing the most suitable method for a particular purpose. We’ve erred on the side of inclusion when drawing up the toolboxes, so they tend to include large numbers of methods, even though they don’t claim to be exhaustive collections.
This article introduces one of those toolboxes; we’ll look at other toolboxes in later articles.
In practice, we’ve found that within each toolbox, there are a few methods that are invaluable, and others that we rarely use. For example, we’ve found graph theory invaluable, particularly in the forms of laddering and facet theory. We also make extensive use of the concepts of sequential processing and parallel processing, particularly pattern-matching. Most of the methods we’ve found invaluable aren’t very widely known outside their home territory, so we’ll be writing about them in future articles.
In addition to the toolboxes, Verifier has an overall process, which deals with which toolboxes to use, when, and how. That process doesn’t follow neat, clearly-defined steps, for various good reasons. The full story is more complex than the seven-step method that you may have seen. We’ll discuss the process after the articles about the individual toolboxes.
Toolbox 1: Elicitation
The first toolbox is about how to gather accurate, reliable information to feed into the logical testing. This is difficult, particularly when you’re dealing with information that people find difficult to put into words, or with topics that people are reluctant to talk about. Our underlying approach here is based on the ACRE framework developed by Neil Maiden and myself. In short, it divides knowledge, memory and communication into various types and sub-types, and then maps each of those types onto a list of methods for eliciting information, identifying the appropriate elicitation method or methods for each type.
The image below shows a schematic representation of this framework.
Each of the knowledge types is subdivided, as follows.
Each of these subdivisions is mapped onto appropriate elicitation methods, as shown below.
Knowledge, memory and communication types (left column) and elicitation methods (right column)
The image above is a simplified version – for example, there are numerous sub-groups of the elicitation methods above, such as structured, semi-structured and unstructured interviews. Also, the categorisation of elicitation methods can be significantly improved by focusing on the component parts within each method, as in the paper on method fragments by Neil Maiden, Peter McGeorge and myself. However, the diagram gives a sense of the underlying concepts in use.
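To make the idea of the mapping concrete, here is a minimal sketch in Python of how a knowledge-type-to-methods lookup might work. The category names and method lists below are simplified illustrations, not the full taxonomy from the ACRE paper:

```python
# Illustrative sketch of an ACRE-style mapping from knowledge types to
# elicitation methods. The categories and method lists are simplified
# examples for illustration, not the full taxonomy from the ACRE paper.

ELICITATION_METHODS = {
    "explicit knowledge": ["interviews", "questionnaires"],
    "semi-tacit knowledge": ["card sorts", "laddering", "repertory grids"],
    "tacit knowledge": ["observation", "think-aloud technique"],
    "future knowledge": ["scenarios", "prototyping"],
}

def suggest_methods(knowledge_type: str) -> list[str]:
    """Return candidate elicitation methods for a given knowledge type."""
    try:
        return ELICITATION_METHODS[knowledge_type]
    except KeyError:
        raise ValueError(f"Unknown knowledge type: {knowledge_type!r}")

print(suggest_methods("semi-tacit knowledge"))
# → ['card sorts', 'laddering', 'repertory grids']
```

The point of the sketch is simply that once knowledge is classified by type, choosing an elicitation method becomes a guided lookup rather than guesswork; the real framework refines this further by working at the level of method fragments rather than whole methods.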
The colouring of the boxes on the left side of the diagram is intended to indicate the progression from explicit knowledge, readily accessible via introspection, through to completely tacit knowledge. The green of “future knowledge types” is intended to indicate that the contents of these boxes are qualitatively different: they are guesses about future actions and preferences, rather than actual memories and knowledge.
The ACRE paper describes the issues and the mappings; we’ll be blogging about this in more detail in further articles.
That’s a brief overview of the elicitation toolbox. Its key points are that it’s big, but of a tractable size, and that it has a guiding framework derived from the literatures on memory, communication and skill.
We have tutorial articles on some of the techniques mentioned in the framework, including laddering, on the Hyde & Rugg website. More articles are on the way.
The ACRE framework reference is:
Maiden, N.A.M. & Rugg, G. (1996). ACRE: a framework for acquisition of requirements. Software Engineering Journal, 11(3), pp. 183-192.
The method fragments paper reference is:
Rugg, G., McGeorge, P. & Maiden, N.A.M. (2000). Method fragments. Expert Systems, 17(5), pp. 248-257.
A tutorial on card sorts:
Rugg, G. & McGeorge, P. (2005). The sorting techniques: a tutorial paper on card sorts, picture sorts and item sorts. Expert Systems, 22(3). (Note: this is a reprint of our 1997 paper in Expert Systems.)