By Gordon Rugg
It’s a truism that a picture is worth a thousand words: true, but profoundly unhelpful. It doesn’t provide any guidance about which types of picture should be matched with which types of words.
This problem is the focus for one of the virtual toolboxes in our Verifier framework for spotting errors in expert reasoning. This toolbox deals with systematic ways of mapping data, information and knowledge onto the appropriate type of visualisation, or feature of a visualisation, such as line length or pattern or shape.
The underlying concept here is not new; earlier work using the same core approach was carried out by Bertin (1974), by Mackinlay (1986) and by Buttenfield and various collaborators. We’ve broadened the scope of previous work by drawing on the literature from other fields, including the methods of data visualisation explored by Tufte, and knowledge representation approaches from Artificial Intelligence, plus contributions from less well-known fields such as androgyny theory.
The core concept of the Verifier visualisation toolbox can be visualised as follows.
The diagram shows various types of data, sets, etc., on the left, each of which is linked by a guiding framework to appropriate image features on the right. So, for example, if you want to represent ratio measures, you could use hue, or saturation, or chroma, since all of these map directly onto a ratio representation. The full set of possible combinations is large, but the appropriate mappings are usually easy to identify after the core concept has been grasped.
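The idea of a guiding framework can be sketched in code as a simple lookup from measurement type to candidate image features. This is only an illustration of the concept, not the actual Verifier toolbox; the "nominal" and "ordinal" entries below are assumptions for the sake of the example, and only the "ratio" entry comes from the text above.

```python
# Illustrative sketch of a guiding framework, NOT the Verifier toolbox itself.
MAPPINGS = {
    "nominal": ["shape", "pattern"],           # assumed entries, for illustration
    "ordinal": ["size ordering"],              # assumed entry, for illustration
    "ratio": ["hue", "saturation", "chroma"],  # from the example in the text
}

def candidate_features(measure_type):
    """Return the image features that can represent the given measure type."""
    return MAPPINGS.get(measure_type.lower(), [])

print(candidate_features("ratio"))  # ['hue', 'saturation', 'chroma']
```

A real framework of this kind would of course contain many more entries, and rules about which feature to prefer in a given context; the point here is just that the mapping can be made systematic rather than ad hoc.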
There’s an in-depth account of Verifier, including more examples from the visualisation toolbox, in my book with Joe D’Agnese, Blind Spot.
Here are some examples of how we’ve applied the visualisation framework.
Example 1: Fuzzy measures
Empirical analysis often uses measures that are fuzzy, rather than clear-cut. A classic example is statistical probability. This is a notoriously difficult concept to explain to non-statisticians, and is a particularly important issue when forensic evidence is being explained to juries in court. We are working with forensics researchers on ways of applying the framework above to forensic evidence.
For example, a common problem arises when explaining to a jury that the most likely time of death is, say, 10.00, with a probability of x% that the death occurred between 8.30 and 11.30. Juries often misinterpret or misremember this to mean that the death actually did occur between those times.
The figure below shows how this can be tackled by showing the probable time of death using a fuzzy scale, where the darkness at any given point on the scale corresponds directly to the statistical probability that death occurred at that point. For instance, if there is a 16% probability that death occurred at a particular time, then the greyscale value at that point in the diagram will be 16%. This representation shows that there’s a gradual decrease in likelihood as we move away from the centre point, but that there’s no single definitive cut-off point in either direction.
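The greyscale mapping can be sketched as follows. This is a hypothetical example, not the forensic tool itself: it assumes a normal distribution of time of death with an illustrative mean of 10.00 and a standard deviation of one hour, rescaled so that the most likely time is 100% dark.

```python
import math

def greyscale_percent(t, mean=10.0, sd=1.0):
    """Darkness (0-100%) at time t on the fuzzy scale.
    Assumes a normal distribution with hypothetical mean 10.00 and
    standard deviation of 1 hour, rescaled so the peak is 100% dark."""
    peak = 1.0 / (sd * math.sqrt(2 * math.pi))
    density = math.exp(-((t - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))
    return round(100 * density / peak)

# A crude text rendering of the scale: a longer bar means a darker shade,
# i.e. a more likely time of death.
for t in range(7, 14):
    shade = greyscale_percent(t)
    print(f"{t:02d}.00 {'#' * (shade // 10):10s} {shade}%")
```

The output shows exactly the property described above: the shade fades gradually on either side of 10.00, with no single cut-off point in either direction.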
The same principle can also be applied in more than one dimension – for example, we are working with a colleague at Keele University on applying this to data involving oceanographic samples, where one axis of the diagram will show the estimated age of the samples, and the other axis will show the estimated sea level when the samples were formed, producing a fuzzy oval to represent each data point.
Example 2: Representing ambivalent responses
A well-recognised problem when using Likert-style scales involves the nature of the anchor points at the ends of the scale, such as whether a scale should be anchored with a negative value at one end. For example, what are the implications of having a scale running from “Strongly disagree” to “Strongly agree” as opposed to having a scale running from “0” to “7”?
Although this is well recognised in the literature, it is often overlooked by pollsters. The example and illustration below show why this distinction is important.
Suppose that a controversial new policy has been proposed, and a survey has been conducted to measure public reaction to it. The survey uses a scale running from “Strongly disagree” to “Strongly agree”. The overall reaction turns out to be surprisingly neutral, with most responses being in the middle of the scale.
Now imagine that another survey of the same issue is conducted, which uses two scales. One scale asks respondents how much they agree with features of the new policy. The second scale asks the same respondents how much they disagree with features of the new policy.
Here’s the sort of insight that you could get from plotting the results from the second survey as a scatter plot.
This representation captures the ambivalence of many responses. Most respondents have strong, mixed feelings about the proposed policy, while only a few have no strong feelings about it. This distribution of attitudes would have been missed by using only a single scale, which would have forced both sets of respondents to use the middle point of the scale, thereby seriously distorting the representation of their actual attitudes.
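The distinction between ambivalence and indifference in the two-scale data can be sketched as follows. The responses and thresholds here are invented for illustration; they are not survey data.

```python
# Hypothetical respondent data: (agreement, disagreement), each on a 0-7 scale.
responses = [(6, 6), (7, 5), (1, 1), (6, 7), (0, 1), (5, 6), (1, 0), (7, 7)]

def classify(agree, disagree, high=5, low=2):
    """Classify one respondent; the thresholds are illustrative."""
    if agree >= high and disagree >= high:
        return "ambivalent"   # strong, mixed feelings
    if agree < low and disagree < low:
        return "indifferent"  # no strong feelings either way
    return "one-sided"

counts = {}
for a, d in responses:
    label = classify(a, d)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'ambivalent': 5, 'indifferent': 3}
```

On a single scale, the ambivalent and the indifferent respondents would both end up at the midpoint and become indistinguishable; with two scales they fall into opposite corners of the scatter plot.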
The same principle of using two scales running from zero to a high value, rather than from a negative to a positive value, has a wide range of applications. For example, it can be applied to sexual orientation, by asking respondents on one scale to rate how strongly they are sexually attracted to men, and on the second scale to rate how strongly they are sexually attracted to women; this approach gives a much more insightful way of representing sexual preferences than is offered by a single scale such as the original Kinsey scale.
Example 3: Representing category membership
The next example involves handling a mixture of greyscale measures and clear-cut “crisp” categories. Often, discussions are confused by misunderstandings about what the participants are asserting. A common case is the “slippery slope” fallacy, where someone asserts that the only options are either a binary categorisation or else a greyscale that offers no clear-cut division between right and wrong.
In reality, there are numerous ways in which an area can be divided into more than two categories. The illustration below shows how this can be applied to definitions of gender; the same principle can also be applied to definitions of sex, and to other related concepts, but for brevity, we’ll focus on gender.
One widespread view is that there is a clear-cut binary division between “male” and “female”. That can be represented by using two boxes of different colours, with a crisp dividing line between them.
Another way of viewing gender is as a greyscale. There are various possible anchor points for the two ends of the greyscale; the “slippery slope” argument would probably use “male” and “female” as the two anchor points.
A third view might be that some people are definitely “male” and others are definitely “female” while others are on a greyscale in between; for example, people who are born intersex.
Another view might be that some people are definitely “male” and some are definitely “female” while some others are in fuzzy greyscale intermediate categories, and others again are in non-greyscale, crisply defined intermediate categories. In the illustration below, the crisply defined intermediate category is shown with a checkerboard pattern.
Yet another view is that this should not be viewed as a scale with categories intermediate between “male” and “female”; instead, it should be viewed as a collection of categories that don’t form a scale. There are various ways of representing this, such as the illustration below.
The key point is not which of these is the correct way of representing gender; the key point is that this approach provides a way of clearly showing which categorisation a person is using, and thereby reducing the risk of misunderstandings and of arguing at cross-purposes.
This type of visualisation can also be applied to a wide range of other areas beyond gender, such as religion, or political affiliation.
At a more advanced level, the same principle of visualisation can be applied to each of the individual defining attributes within a portmanteau category, i.e. a category defined in terms of several attributes, each of which might be crisp or fuzzy.
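The idea of a portmanteau category with a mixture of crisp and fuzzy defining attributes can be sketched like this. The combination rule and the values are assumptions chosen for illustration.

```python
def crisp(value):
    """Crisp attribute: membership is exactly 0 or 1."""
    return 1.0 if value else 0.0

def fuzzy(value):
    """Fuzzy attribute: membership anywhere in the range [0, 1]."""
    return max(0.0, min(1.0, value))

def portmanteau_membership(attribute_memberships):
    """Overall membership in the portmanteau category.
    Uses the minimum rule (the weakest attribute limits overall
    membership); other combination rules are equally possible."""
    return min(attribute_memberships)

# One crisp attribute and two fuzzy ones, with illustrative values.
m = portmanteau_membership([crisp(True), fuzzy(0.7), fuzzy(0.4)])
print(m)  # 0.4
```

The point of the sketch is that crispness or fuzziness is a property of each individual attribute, not of the category as a whole.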
Example 4: Translating between serial processing and parallel processing – Search Visualizer
Information can be processed in a serial, step-by-step fashion, or in a parallel, many-things-at-once way. This important distinction has received far less attention than it deserves in many fields. Serial processing is invaluable for calculations and logical operations, and is at the heart of how most computers operate. Parallel processing is invaluable for handling visual information, and is particularly useful for identifying patterns and objects; it’s something that most computers do very badly. Humans, on the other hand, are very good indeed at parallel processing and at pattern matching.
Ed de Quincey and I applied the distinction between serial and parallel processing to online search, where a major bottleneck in traditional search occurs when the search engine hands over a huge batch of records to the user to assess for relevance. If you assess their relevance by trying to read the records, you’re using serial processing, which is slow; even with “search in text” functions, you’ll be lucky to handle more than a few hundred words per minute.
What we did with our Search Visualizer software was to translate the records into a visual format that users could handle via parallel processing. This is far faster and more efficient; it not only allows users to find relevant records much more quickly and easily, but also allows them to analyse structures within texts in new ways.
The illustration below shows part of a Search Visualizer image for a search about medication for bradycardia (slow heart rate).
The image shows four records found on a standard Bing search for those terms. Each of the columns shows one record; each square within a column represents a word, with colour-coding for the keywords, so that wherever the word “bradycardia” occurs it is shown as a red square, and wherever the word “medication” occurs, it is shown as a green square (or darker and lighter shades, for readers who are colour blind). It’s like a miniaturised version of each document, with coloured highlighter applied wherever the keywords occur.
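The underlying encoding can be sketched in a few lines. This is a crude text-mode imitation of the idea, not the Search Visualizer software itself: each word becomes one character, with a mark for each keyword and a dot for everything else. The sample sentence and marks are invented for illustration.

```python
def keyword_strip(text, keyword_marks):
    """Map each word of `text` to one character: the mark for a keyword,
    or '.' for any other word - a text-mode sketch of the idea of
    colour-coding keywords in a miniaturised document."""
    out = []
    for word in text.lower().split():
        out.append(keyword_marks.get(word.strip(".,;:()"), "."))
    return "".join(out)

sample = ("Medication for bradycardia depends on the cause; "
          "bradycardia treatment may need no medication.")
print(keyword_strip(sample, {"bradycardia": "R", "medication": "G"}))
# G.R....R....G
```

Even in this crude form, the distribution of the keywords through the document is visible at a glance; and because the method is pure string matching, it works unchanged on any language, which is the property exploited in the German example below.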
The first record contains numerous mentions of both words, in three clearly distinct bands. The second record contains numerous mentions of both words, but with medication mentioned only in the first and the final quarters of the document. The third has a few mentions of medication near the start, and a few towards the end; mentions of bradycardia are almost entirely confined to the first half of the document, and there is a large chunk of the document that contains no mentions of either term. The fourth document has some mentions of bradycardia and of medication in the first half, and a couple of mentions of medication in the second half.
All four records look relevant, but the first one has an odd distribution pattern for the keywords, both in terms of the three-band clustering, and also in terms of how often the keywords occur very near each other. This turns out to be because that record is an index page that contains links to other pages.
So, with Search Visualizer we can swiftly assess how relevant records are likely to be without needing to read the text, and we can also spot odd patterns in the text. What’s particularly interesting about this example is that the original search was actually in German, for “bradykardie” and “medikament”. This has far-reaching implications for researchers wanting to find relevant records in foreign-language literatures.
We’ve applied Search Visualizer to a range of texts and topics, including gendered language in Shakespeare, and literary structures in the Bible. Since Search Visualizer can significantly compress the on-screen size of a text, it’s possible to view substantial texts on a single screen, which has a lot of advantages for anyone analysing texts and transcripts. There are examples of this on the Search Visualizer blog site; there’s a free online version of the software on the main Search Visualizer site.
Summary and conclusion
This article gives a brief overview of the visualisation toolkit within Verifier, with some examples of how we’ve applied that toolkit to a range of fields. We’ll be publishing more about this in later articles.
One of the earliest uses of this systematic approach to visualisation is: Bertin, J. (1974). Sémiologie Graphique (German translation by G. Jensch, D. Schade & W.G. Scharfe, Eds.). There is a more recent translation into English: Semiology of Graphics (W.J. Berg, Trans.). Esri Press (2010).
Another earlier framework using the same underlying approach is: Mackinlay, J. (1986). Automating the design of graphical presentations of relational information. ACM Transactions on Graphics , 5 (2), 110-141.
A classic set of guidelines for visualising descriptive statistics is in Huff’s brilliant book: Huff, D. (1954). How to Lie with Statistics. W.W. Norton: New York.
Another classic text is Tufte: Tufte, E.R. (2001). The Visual Display of Quantitative Information (2nd edition). Graphics Press: USA.
Our work on using pairs of scales was inspired by Sandra Bem’s work on androgyny theory: Bem, S.L. (1974). The measurement of psychological androgyny. Journal of Consulting and Clinical Psychology, 42, 155-162.
The Search Visualizer website is here: http://www.searchvisualizer.com
The Search Visualizer blog site, which contains articles demonstrating various ways of using the software, is here: http://searchvisualizer.wordpress.com/
There’s an article by David Musgrave and myself about textual analysis of ancient texts here: http://www.bibleinterp.com/articles/2013/rug378029.shtml
All images in this article are copyleft Hyde & Rugg Associates; you’re welcome to use them for any non-commercial purpose, including academic lectures, provided that you retain this copyleft notice.