By Gordon Rugg
Other people frequently appear to behave like idiots. As is often the case, there’s a simple explanation for this, and as is also often the case, the full story is a bit more complex, but still manageably simple once you get your head round it.
First, the simple explanation. People usually behave in a way that looks idiotic for one or more of the three following reasons:
- Sin
- Error
- Slips
This model comes from the literatures on human error and on safety-critical systems; there are variations on the wording and on some of the detail (particularly around slips) but the core concepts are usually the same.
- Sin (or “violations” in the more common version of the name) involves someone deliberately setting out to do the wrong thing. I’ll return later to possible reasons for people doing this.
- Error involves people having mistaken beliefs; for example, they believe that closing a particular valve will solve a particular problem.
- Slips involve someone intending to do one thing, but unintentionally doing something different; for example, intending to press the button on the left, but accidentally pressing the button on the right.
These issues can and do interact with each other; for instance, someone may be planning to do the wrong thing because of an error, but may fail to do that wrong thing because of a slip. In this hypothetical example, the slip may cancel out the error, so that everything ends well, though only by accident. In other cases, the slip may lead to the error having worse effects.
So far, so good. It’s a clear, simple and powerful model that performs well in the real world. When you dig a bit deeper, though, things become more complex before becoming fairly simple again. I’ll start by flipping the question round, which exposes the core problem.
Flipping the question round: What would constitute “correct” behaviour?
Traditional logic is very good at showing how badly normal human beings perform at traditional logic. It’s also very good at working systematically through the consequences of a given set of starting assumptions.
However, it’s not very good at giving solutions for most problems in everyday life. Here’s an example. I was recently driving to work when I heard an alarmingly loud noise that appeared to be coming from my car’s engine. After a few seconds, the noise stopped as suddenly as it had appeared. What would traditional logic tell me to do in that situation?
Traditional logic wouldn’t be able to help much. The core issues in this case involved weighing up the likelihoods and the possible severities of the likely outcomes, such as the risk of the car breaking down on a country road in the middle of nowhere, and the risk of missing a significant meeting at work. They also involved weighing up the likelihood of other explanations for the noise, such as it possibly being a power tool in use on the nearby industrial estate. This in turn led into an endless regress of follow-on questions, such as how likely it was that someone would be using a power tool at 7.30 in the morning.
This is a long way from the “If this is true and that is true then this other thing is true” format of traditional logic, which doesn’t deal in likelihoods. Instead, this is much more like fuzzy logic, which wasn’t invented until well within living memory, or like Bayesian reasoning, which is still hotly debated by logicians and statisticians a couple of centuries after its invention.
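To make that weighing-up a little more concrete, here’s a minimal sketch of the likelihood-times-severity trade-off. The probabilities and costs below are invented purely for illustration; the point is the shape of the calculation, not the numbers.

```python
# Minimal sketch of weighing likelihood against severity.
# All numbers here are invented for illustration only.

# Rough subjective probabilities for what the noise was.
p_engine_fault = 0.2   # the noise really was the engine
p_power_tool = 0.8     # e.g. a power tool on the nearby industrial estate

# Rough severities of the possible outcomes, on an arbitrary cost scale.
cost_breakdown = 100       # breaking down on a country road in the middle of nowhere
cost_missed_meeting = 30   # stopping to check, and missing the meeting
cost_nothing = 0           # the noise was harmless and you keep driving

# Expected cost of each action: sum of (probability x severity).
keep_driving = p_engine_fault * cost_breakdown + p_power_tool * cost_nothing
stop_and_check = cost_missed_meeting   # you miss the meeting whatever the cause was

print(f"keep driving:   expected cost {keep_driving:.0f}")
print(f"stop and check: expected cost {stop_and_check:.0f}")

# With these particular guesses, keeping driving comes out cheaper (20 vs 30);
# nudge the guesses slightly and the answer flips, which is exactly why
# traditional logic can't hand you a guaranteed "correct" choice here.
```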
So, traditional logic is unable to give guaranteed correct solutions to the majority of real world problems, and therefore people can’t follow a set of guidelines which will guarantee that they never make the wrong decision. In one way, that’s not very encouraging. In another way, it’s reassuring, since you don’t need to feel so bad next time you get something wrong. Probably…
Working through the complexity for sin, error and slips
Why do people commit sin, in the sense of deliberately not doing what they are supposed to do? (I’ll leave the theological sense of “sin” for another day…)
There are two main reasons:
- Believing that what they are being expected to do is either morally wrong or factually mistaken
- Self-interest: Believing that they personally will be better off as a result of not doing what they are expected to do
With error, there is the central question of whether we can reasonably say that a particular belief is correct. This raises a variety of issues. I’ll focus on two, which between them cover most of the key points.
One is bounded rationality. For most non-trivial questions, you could spend literally years trying to gather all the relevant information before making a decision. There isn’t enough time to do this for every question we encounter in everyday life, so we have to make a best guess from the information that we are able to gather in the time available. Sometimes, that information is misleading, so we end up making decisions based on plausible but mistaken grounds. Sometimes, we will never be able to gather all the relevant information, so we’ll never have a definitive answer.
The second issue is demonstrably mistaken beliefs about reality. These are a staple for YouTube “fail” compilations, such as “getting off a boat” fails where people have a mistaken mental model of Newton’s laws of motion. Where these mistakes involve easily visible immediate results, they’re easy to detect. However, when they are not immediately visible, they’re more likely to persist.
A classic example is the belief that subsystem optimisation necessarily leads to system optimisation; in other words, improving a part will also improve the whole. This belief is demonstrably wrong; for instance, putting a more powerful (“better”) engine in a car won’t necessarily make the car as a whole better, if the brakes aren’t able to handle the increased power. However, because the consequences of this belief aren’t immediately and obviously visible as mistakes, it’s a persistent and widespread error.
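Here’s a toy sketch of that point (my own illustration, not from any particular system): if the performance of the whole is set by its weakest part, then improving one part in isolation leaves the whole unchanged.

```python
# Toy illustration: a serial pipeline whose overall throughput is limited
# by its slowest stage (the bottleneck).

def system_throughput(stages):
    """The whole system can only go as fast as its slowest subsystem."""
    return min(stages.values())

stages = {"engine": 50, "gearbox": 40, "brakes": 20}   # arbitrary units

print(system_throughput(stages))   # 20 -- the brakes are the limit

# "Optimise" the engine subsystem in isolation...
stages["engine"] = 500
print(system_throughput(stages))   # still 20 -- the whole is no better

# The part improved dramatically; the system didn't, because the bottleneck
# (and the part's interaction with the rest of the system) was ignored.
```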
Finally, slips. A classic example of a slip is “strong but wrong”, where in a moment of distraction you do the thing you most often do, rather than the thing you should do. For instance, if you use your office key numerous times each day, but only use your front door key a couple of times each day, you’ll sometimes find yourself trying to open your front door with your office key, but you’ll hardly ever find yourself trying to open your office door with your front door key.
Conclusion
The sin/error/slip model makes sense of a lot of apparently idiotic behaviour. Other common issues such as miscommunications and different value systems can also arguably be handled by that model.
What can we do to improve the likelihood of people behaving less like idiots? Happily, there are various reasonably effective solutions.
- One involves representations. For example, you can represent the possible outcomes of a decision as a table or as a flowchart, to check whether you’ve missed something significant (there’s a sketch of this after the list).
- Another involves design: for instance, designing a product so that a likely error is blocked by making the erroneous action physically impossible.
- A third involves turning human biases to good use: for instance, exploiting the human bias towards remembering vivid examples by using vivid examples in safety lectures, so that the training is more memorable.
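As a sketch of the first of those points, here’s one way the car-noise decision from earlier could be laid out as an outcomes table. The actions, scenarios and outcomes are my own illustrative wording; the value of the representation is that an empty cell shows you an outcome you haven’t yet thought about.

```python
# Hypothetical example of representing a decision as an outcomes table.

actions = ["keep driving", "stop and check"]
scenarios = ["noise was the engine", "noise was a power tool"]

outcomes = {
    ("keep driving", "noise was the engine"): "breakdown in the middle of nowhere",
    ("keep driving", "noise was a power tool"): "arrive on time, no harm done",
    ("stop and check", "noise was the engine"): "late for the meeting, fault caught early",
    ("stop and check", "noise was a power tool"): "late for the meeting, for nothing",
}

# Any (action, scenario) pair with no entry shows up as a gap in your thinking.
for action in actions:
    for scenario in scenarios:
        outcome = outcomes.get((action, scenario), "??? -- not yet considered")
        print(f"{action:15} | {scenario:25} | {outcome}")
```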
A closing thought: Sometimes, people behave in a way that looks idiotic to you, because actually you’re the one who’s wrong. Bearing that in mind before you judge others has the dual advantages of making you look wiser, and of making you less likely to be in the wrong yourself. I’ll end on that positive note…
Notes, references and links
You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.
There’s more about the theory behind this article in my latest book: Blind Spot, by Gordon Rugg with Joseph D’Agnese
http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903
You might also find our website useful: http://www.hydeandrugg.com/
Comments

Fascinating post.
One question.
“for example, they believe that closing a particular valve will solve a particular problem.”
Surely if the person believes that closing the valve will solve the problem, then closing the valve in order to solve the problem is not acting like an idiot. Indeed, not closing the valve in such circumstances would be acting like an idiot.
Are you suggesting that they appear to others to be acting like an idiot rather than actually acting like an idiot?
Just wondering as this would appear to be key to the article.
Thanks for catching this. Yes, I meant that to other people who were unaware of that person’s belief, the person would appear to be acting like an idiot. However, given what that person believes, the action would be perfectly reasonable (though erroneous).
Thanks, an interesting article.
It reads as though each of the three reasons is centred on the subject. Are there cases where the root of the error is contextual, and would these be considered differently? For example, closing that particular valve has consistently worked in the past, but a change in the environment prevents it working this time. I guess you could classify this as a case of ‘error’, but I wondered whether errors in belief (internal) and errors in environment (external) would be considered differently?
Good point. There are quite a few formal and informal taxonomies of error in the error literature and the disaster literature, as well as models (as opposed to taxonomies), and I’m pretty sure that your point is addressed in at least one of those. If you feel like having a look through them, they’re interesting, but typically complex, which is why I blogged about this three-level model, as something which is very simple, but very powerful considering its size.