By Gordon Rugg
This article is a short introduction to some basic principles involved in representing argumentation, evidence and/or chains of reasoning using systematic diagrams.
This approach can be very useful for clarifying chains of reasoning, and for identifying gaps in the evidence or in the literature.
As usual, there’s an approach that looks very similar, but that is actually subtly and profoundly different, namely mind maps. That’s where we’ll begin.
Mind maps are widely used, and are a simple way of showing links between concepts. In the mind map above, item X is linked to items A, B, C and D.
How are they linked? The mind map doesn’t say.
How strongly are they linked? The mind map doesn’t say.
This mind map is useful for showing what is connected to what, and for giving you an idea of how many things you’re dealing with, but that’s as far as most mind maps go. If you want to do something more powerful, then you need a representation that does more, like the one below.
Ways of showing more information in the links
The diagram below shows four ways of adding more information:
- Arrows (as opposed to ordinary lines)
- Line thickness
- Colour
- Line type (continuous or dashed)
Arrows are useful for showing the directions of association. For instance, “A causes X” is very different from “X causes A”.
Line thickness is useful for showing the strength of associations. Strength can take various forms. For instance, it might mean the likelihood of an association, or the number of peer reviewed journal articles about the association.
Line colour and line type are both useful for showing the nature of associations. For instance, red lines might be used to show associations that have medical implications, or dashed lines might be used to show postulated associations for which there is currently no evidence.
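If you want to build diagrams like this programmatically rather than by hand, the four conventions above map naturally onto edge attributes in a graph description language. Here’s a minimal sketch that emits Graphviz DOT text; the node names follow the article’s A–D and X example, but the specific strengths, colours and styles are illustrative, not taken from any real dataset.

```python
# Sketch: encode the four link conventions (direction, thickness,
# colour, line type) as Graphviz edge attributes. Node names follow
# the article's example; the attribute values are illustrative.

def edge(src, dst, strength=1, colour="black", style="solid"):
    """One directed edge: the arrowhead shows direction, penwidth
    shows strength, colour and style show the nature of the link."""
    return (f'  {src} -> {dst} '
            f'[penwidth={strength}, color="{colour}", style={style}];')

def to_dot(edges):
    """Wrap a list of edge tuples in a complete DOT digraph."""
    lines = ["digraph argument {"] + [edge(*e) for e in edges] + ["}"]
    return "\n".join(lines)

dot = to_dot([
    ("A", "X", 3, "red",   "solid"),   # strong; medically relevant
    ("B", "X", 2, "black", "solid"),
    ("C", "X", 1, "black", "solid"),
    ("D", "X", 1, "black", "dashed"),  # postulated, no evidence yet
])
print(dot)
```

Feeding the printed text to the `dot` tool would render the diagram; the point of the sketch is simply that each visual convention lives in exactly one attribute, which makes the consistency discussed below easy to enforce.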
The key principle is that whichever of these you use, you need to use it completely consistently within the diagram. That may sound obvious, but a surprising number of people don’t pay much attention to consistency in their diagrams, producing a rich crop of confusion for everyone involved. It’s a good idea to include a key with the diagram saying explicitly which notations have which meaning.
In this article, I’ll use the worked example of a diagram to show how much research has been done into the various possible causes of a condition. This keeps the issues fairly simple. In a later article, I’ll look at ways of showing more complex issues, such as the strength of evidence for and against competing ideas.
One issue that you very soon encounter with this example is the problem of showing the numbers involved with reasonable accuracy. For instance, if there’s only one article about one of the possible causes, but there are several hundred articles about another possible cause, then using line thickness to show the weight of evidence isn’t going to work well. You’d either need infeasibly thick lines for the heavily-researched causes, or you’d need to use a set of conventions for line thickness that would be misleading.
I’ll unpack that last point, since it’s an important one. As far as the human visual system is concerned, a line that’s three times thicker signals an association that’s about three times stronger. If you’re using a convention that a line thickness of 3 means that there are hundreds of articles, and a line thickness of 4 means that there are thousands of articles, then yes, you’re using a convention that is internally consistent, but that doesn’t mean that it’s consistent with what the human visual system will be trying to tell the person viewing the diagram.
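The size of that mismatch is easy to work out. The sketch below uses illustrative article counts for the two thickness bands just described; the numbers are hypothetical, but the arithmetic is the point.

```python
# Sketch of the mismatch described above: a banded thickness
# convention is internally consistent, but the eye reads thickness
# roughly proportionally. The article counts are illustrative.

convention = {3: 500, 4: 5000}  # line thickness -> rough article count

perceived_ratio = 4 / 3                       # what the eye reports
actual_ratio = convention[4] / convention[3]  # what the data says

print(f"perceived: {perceived_ratio:.2f}x, actual: {actual_ratio:.0f}x")
```

The viewer’s visual system reports roughly a 1.3x difference, while the underlying data differ by a factor of ten; that gap is what makes the convention misleading even though it is internally consistent.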
An obvious solution is to add numbers to the lines joining the boxes, like this.
That may look very sensible and logical, but again, it runs into problems with the human visual system. One problem is that the visual system will stubbornly perceive a two-digit number as being twice as big as a one-digit number; yes, that will be overridden further on, but it’s still a complication. Another, related issue is that by asking the viewer to read the numbers, you’re asking them to switch from fast, efficient parallel processing into slow, inefficient serial processing; you’re also requiring the viewer to use their limited-capacity working memory to store and compare the numbers.
A more efficient and elegant solution is to present the same information in a format that fits with the strengths and weaknesses of the human visual system, such as this format.
In this example, the numbers are still visible, but the circles containing the numbers each have a colour intensity that corresponds directly to the number shown within the circle. The circle between D and X, for instance, contains the number 1, and has an intensity of 1; the circle between A and X contains the number 53, and has a colour intensity of 53. (Technical note: For this example, I’ve used the PowerPoint graphics transparency function, which comes on a percentage scale.)
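The same number-to-intensity mapping can be computed directly. The sketch below mirrors the percentage-scale transparency approach described above, translated into an RGBA alpha value; the base colour is an assumption, and the counts follow the example (1 for D–X, 53 for A–X).

```python
# Sketch: map each article count directly to a colour intensity,
# mirroring the percentage-based transparency scale described in
# the text. The base colour is an illustrative assumption.

def intensity_rgba(count, base=(200, 30, 30)):
    """Return an RGBA tuple whose alpha channel (0-255) corresponds
    to a count on a 0-100 percentage scale; counts above 100 clamp
    to full intensity."""
    alpha = round(255 * min(count, 100) / 100)
    return base + (alpha,)

print(intensity_rgba(1))    # near-transparent circle, as for D-X
print(intensity_rgba(53))   # mid-intensity circle, as for A-X
```

Because intensity is computed from the count rather than assigned by hand, the diagram stays consistent automatically: the eye’s proportional reading of intensity and the number inside the circle can never drift apart.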
Causes of causes
This approach is useful for simple representations like the ones above. It’s even more useful when dealing with more complex representations, such as tracing chains of causation across multiple links and across multiple disciplines.
Here’s an example. The diagram below shows proposed causes of X, and the proposed causes of those causes. It deliberately doesn’t show the proposed strength of association, or weight of evidence; instead, it’s focusing on the pattern of evidence.
In this diagram, X has four proposed causes.
There are three proposed causes for A; there are two proposed causes for B; there is one proposed cause for C; and there is no proposed cause for D. It looks as if the evidence on this topic is lopsided, with A and B being better understood than C and D.
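This cause-of-causes structure is just a graph, and the lopsidedness can be read off mechanically. Here’s a sketch of the structure just described as a plain dictionary; the cause names beyond A–D and X are hypothetical placeholders.

```python
# Sketch of the cause-of-causes structure described above: each key
# lists its proposed causes. Names beyond A-D and X are hypothetical
# placeholders standing in for causes in the literature.

causes = {
    "X": ["A", "B", "C", "D"],
    "A": ["A1", "A2", "A3"],   # three proposed causes
    "B": ["B1", "B2"],         # two proposed causes
    "C": ["C1"],               # one proposed cause
    "D": [],                   # a dead end in this literature
}

# Dead ends: proposed causes of X with no proposed causes of their own.
dead_ends = [c for c in causes["X"] if not causes.get(c)]
print(dead_ends)
```

Scanning for empty entries like this is the programmatic equivalent of spotting gaps in the diagram by eye, and it scales to graphs far too large to check visually.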
What often happens, however, is that the chain of causality takes you out of one discipline and into another. Quite often, something that looks like a dead end within one literature joins up with a rich and well understood network of knowledge in another literature.
We can represent that by colouring the boxes to show the literature where each box originates, as in the diagram below.
In this diagram, another literature contains another cause for C, and a cause for D, which was previously a dead end. Both these new causes are shown in light green.
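Tagging each box with its source literature is, again, a single node attribute if you build the diagram programmatically. The sketch below emits Graphviz node-colouring lines; the literature names and the extra causes are hypothetical placeholders matching the description above.

```python
# Sketch: tag each box with the literature it comes from, then emit
# Graphviz node-colouring lines. Literature names and the extra
# causes are hypothetical placeholders.

literature = {
    "X": "original", "A": "original", "B": "original",
    "C": "original", "D": "original",
    "C2": "other",   # new cause of C, from another literature
    "D1": "other",   # new cause of D, resolving the dead end
}

palette = {"original": "white", "other": "lightgreen"}

def node_line(name):
    """One DOT node statement whose fill colour shows the source
    literature of that box."""
    colour = palette[literature[name]]
    return f'  {name} [style=filled, fillcolor={colour}];'

for name in literature:
    print(node_line(name))
```

Keeping the colour in a single palette dictionary enforces the consistency principle from earlier: one literature, one colour, everywhere in the diagram.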
The next diagram shows why this new evidence can be disproportionately useful. I’ve made the simplifying assumption that causes A, B, C and D are all equally important with regard to X.
In this diagram, I’m using line thickness to show the importance of a causal association, and broken lines to show speculative associations. In this hypothetical example, the two new causes shown in light green boxes are much more important than any of the other causes shown. (I’ve deliberately not unpacked “important,” for simplicity.)
Discussion and closing thoughts
Formal diagrams are used in numerous disciplines.
One constant theme across those disciplines is the need to be completely consistent within a diagram, with regard to what each element in the diagram means. If you don’t do this, it’s a recipe for chaos.
Another constant theme is the tension between the diagram as a functional tool and the diagram as an illustration. If diagrams are re-worked by professional illustrators for publication, for instance, this often leads to problems when the illustrator changes something in the diagram for purely aesthetic reasons, not realising that it changes the meaning of the diagram.
A deeper recurrent theme is the purpose of the diagram. In computing, for instance, there’s a long-running debate about whether a formal diagram is more useful as a product, where you produce the diagram, and then keep using it at key points, or as a process, where you use the process of creating the diagram to identify and clarify the key issues, and then discard it, because it’s done its job.
One theme that’s received less attention is the role of the human visual system in designing diagrams consistently. I’ve touched on this topic above, with regard to how the human visual system perceives line thickness. It’s an important topic, with significant implications for problems such as explaining technical concepts to the general public, or communicating across research disciplines. The underlying principles are well understood in relevant fields – for instance, some of the key concepts were mapped out by Fechner and by Weber over a century ago. However, these principles haven’t yet become part of standard practice in formal diagrams and notations in all fields.
In a later article, I’ll look at ways of using formal diagrams to represent more complex forms of argumentation, and to represent the quality and nature of evidence.
Notes
You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.
There’s more about the theory behind this article in my latest book, Blind Spot, by Gordon Rugg with Joseph D’Agnese:
http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903
Related articles:
The diagrams in this article are based on graph theory, and in particular on graph colouring and graph labelling:
https://hydeandrugg.wordpress.com/2013/05/30/an-introduction-to-graph-theory/
https://en.wikipedia.org/wiki/Graph_theory
https://en.wikipedia.org/wiki/Graph_coloring
https://en.wikipedia.org/wiki/Graph_labeling
Parallel processing and serial processing:
Assessing evidence and reasoning:
https://hydeandrugg.wordpress.com/2014/07/05/logic-evidence-and-evidence-based-approaches/