Many problems in life are caused by misunderstandings. Misunderstandings take various forms. These forms themselves are, ironically, often misunderstood.
In this article, I’ll look at ways of representing misunderstandings visually, to clarify what is going wrong, and how to fix it.
I’ll use a positive/negative plot to show the different forms of misunderstanding. This lets you locate a given statement in terms of how positive it is, and how negative it is, as in the image below. This format is particularly useful for representing mixed messages, which are an important feature of many misunderstandings. There’s more about versions of this format here and here.
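As a concrete illustration, here is a minimal sketch in Python of how such a plot might be drawn. The statements and their scores below are invented for illustration; they are not taken from the original image.

```python
# A minimal sketch of a positive/negative plot, using matplotlib.
# Each statement is scored separately for how positive and how
# negative it is; a mixed message scores high on both axes.
import matplotlib.pyplot as plt

# Invented statements and scores, purely for illustration.
statements = {
    "That's wonderful": (0.9, 0.1),
    "That's awful": (0.1, 0.9),
    "Nice try, I suppose": (0.6, 0.6),   # mixed message: high on both
    "It is what it is": (0.2, 0.2),      # low on both axes
}

fig, ax = plt.subplots()
for text, (pos, neg) in statements.items():
    ax.scatter(pos, neg)
    ax.annotate(text, (pos, neg), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("How positive the statement is")
ax.set_ylabel("How negative the statement is")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_title("Positive/negative plot (illustrative scores)")
plt.show()
```

The key design point is that positivity and negativity are treated as two independent axes rather than two ends of a single scale, which is what lets mixed messages show up as a distinct region of the plot.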
In a previous article, I looked at the belief structures of the archetypal “crazy uncle” worldview. It’s a worldview with a strong tendency towards whichever option requires the minimum short-term cognitive load; for example, binary yes/no categorisations rather than greyscales or multiple categories.
One theme I didn’t explore in that article, for reasons of space, was premature closure. This article picks up that theme.
Premature closure is closely related to pre-emptive categorisation, which I’ll also discuss in this article. Both these concepts have significant implications, and both involve minimising short-term cognitive load, usually leading to problems further down the road. Both tend to be strongly associated with the authoritarian worldview, for reasons that I’ll unpack later.
So, what is premature closure? In brief, it’s when you make a decision too early, prematurely closing down the process of search, evaluation and decision-making. This takes various forms; knowing these forms improves your chances of stopping at the right point. For clarity, I’ll use examples within the context of common “crazy uncle” arguments.
Other people frequently appear to behave like idiots. As is often the case, there’s a simple explanation for this, and as is also often the case, the full story is a bit more complex, but still manageably simple once you get your head round it.
First, the simple explanation. The model behind it comes from the literatures on human error and on safety-critical systems; there are variations in the wording and in some of the detail (particularly around slips), but the core concepts are usually the same. On this model, people usually behave in a way that looks idiotic for one or more of the following three reasons:
Sin (or “violations”, the term more commonly used in the literature) involves someone deliberately setting out to do the wrong thing. I’ll return later to possible reasons why people do this.
Error involves people acting on mistaken beliefs; for example, believing that closing a particular valve will solve a particular problem, when in fact it won’t.
Slips involve someone intending to do one thing, but unintentionally doing something different; for example, intending to press the button on the left, but accidentally pressing the button on the right.
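If it helps to pin the distinctions down, here is a minimal sketch in Python that encodes the three categories. The classify function and its yes/no questions are my own illustration, not a formal instrument from the human error literature.

```python
# A minimal sketch of the sin/error/slip taxonomy as code.
from enum import Enum, auto

class FailureType(Enum):
    SIN = auto()    # deliberately setting out to do the wrong thing (a violation)
    ERROR = auto()  # acting on a mistaken belief
    SLIP = auto()   # intending one action, but unintentionally performing another

def classify(intended_wrong: bool, belief_correct: bool, acted_as_intended: bool):
    """Classify an apparently idiotic action from three yes/no judgments."""
    if intended_wrong:
        return FailureType.SIN
    if not belief_correct:
        return FailureType.ERROR
    if not acted_as_intended:
        return FailureType.SLIP
    return None  # no failure under this model

# Example: the operator meant well and held correct beliefs,
# but pressed the wrong button: a slip.
print(classify(intended_wrong=False, belief_correct=True, acted_as_intended=False))
# FailureType.SLIP
```

Note the ordering of the checks: intent is tested first, then belief, then execution, so each category is only reached once the previous ones have been ruled out.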
It’s a fair question, if it’s being asked as a question, rather than as a complaint about the cosmic unfairness of having to study a topic that you don’t see the point of. Sometimes, it’s easy to answer. For instance, if someone wants to be a doctor, then checking their knowledge of medicine is a pretty good idea.
Other times, though, the answer takes you into deep waters that you’d really rather not get into, especially if there’s a chance of some student recording your answer and posting it onto social media…
Why do some answers take you into deep waters? That’s the topic of this article. It takes us into history, politics, proxies, and the glass bead game.
There’s a widely used concept called the 80:20 Principle, or the Pareto Principle, named after the economist Vilfredo Pareto, whose observations about the lopsided distribution of wealth inspired it. It’s extremely useful.
In brief, across a wide range of fields, about 80% of one thing will usually come from 20% of another.
In business, for example, 80% of your revenue will come from 20% of your customers. In any sector, getting the first 80% of the job done will usually take about 20% of the resources involved; getting the last 20% of the job done will usually be much harder, and will take up 80% of the resources. The figure won’t always be exactly 80%, but it’s usually in that area. Good managers are very well aware of this issue, and keep a wary eye out for it when planning.
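If you want to check for this pattern in your own figures, here is a minimal sketch in Python; the revenue numbers are invented for illustration.

```python
# A minimal sketch of checking the 80:20 pattern in revenue data.
# The revenue figures below are invented for illustration.
revenues = sorted([120, 95, 80, 15, 12, 10, 8, 5, 3, 2], reverse=True)

total = sum(revenues)
running = 0
for i, r in enumerate(revenues, start=1):
    running += r
    if running >= 0.8 * total:  # stop once we reach 80% of total revenue
        break

print(f"{i} of {len(revenues)} customers ({i / len(revenues):.0%}) "
      f"account for {running / total:.0%} of revenue")
```

With these invented figures, the top 30% of customers bring in roughly 84% of revenue: close to, though not exactly, the 80:20 split, which is the usual situation in practice.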
Here’s a diagram showing the principle. It’s pretty simple, but very powerful. That doesn’t mean it’s perfect, though; it can be developed into something richer and more powerful still, which is what we’ll describe in this article.
The Tempest Prognosticator is a splendid example of nineteenth-century ingenuity, right down to the name. What does it do? It’s intended to let you know if a storm is approaching. The way it does this is as wonderfully nineteenth-century as the name. The Prognosticator is operated by twelve leeches, each of which lives in a bottle. When storms are approaching, the leeches become agitated, and climb out of the bottle. When they climb out, they disturb a piece of whalebone, which activates a bell. The more serious the risk of storm, the more leeches climb out, and the more bells ring.
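As an aside, here is a toy simulation in Python of the Prognosticator’s logic; the sensitivity thresholds are invented, since real leeches are presumably less tidy about their statistics.

```python
# A toy simulation of the Tempest Prognosticator's logic: the more
# leeches are agitated enough to climb out, the more bells ring.
import random

N_LEECHES = 12  # the Prognosticator housed twelve leeches

def bells_rung(storm_severity: float) -> int:
    """Count how many leeches climb out and trigger the bell.
    Each leech gets its own randomised sensitivity threshold;
    a leech climbs out when the storm exceeds that threshold."""
    thresholds = [random.random() for _ in range(N_LEECHES)]
    return sum(storm_severity > t for t in thresholds)

print(bells_rung(0.2))  # calm weather: few bells
print(bells_rung(0.9))  # severe storm: most of the twelve bells
```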
The best examples of powerful principles often come from unexpected places.
Today’s article is one of those cases. It’s about why it often makes excellent sense to use a particular method, even when you’re fully aware that the method doesn’t work as advertised on the box, or doesn’t work at all.
It’s a story that starts with one of the most widely misunderstood cards in the Tarot pack. The story also features some old friends, in the form of game theory, pattern matching and script theory. It ends, I hope, with a richer understanding of why human behaviour often makes much more sense when you look at the deep underlying regularities, rather than at the surface appearances.
So, we’ll start with Tarot cards. A surprisingly high proportion of people who use Tarot packs will cheerfully tell you that the cards have no mystical powers. Why would anyone use Tarot cards if they don’t have those powers? There are actually some very good reasons.
Sources for images are given at the end of this article.