It’s logic, Jim, but not as we know it: Associative networks and parallel processing

By Gordon Rugg

A recurrent theme in our blog articles is the distinction between explicit knowledge, semi-tacit knowledge and tacit knowledge. Another recurrent theme is human error, in various forms. In this article, we’ll look at how these two themes interact with each other, and at the implications for assessing whether or not someone is actually making an error. We’ll also re-examine traditional logic, and judgement and decision-making, to see how they make a different kind of sense in light of these types of knowledge and mental processing. We’ll start with the different types of knowledge.

Explicit knowledge is fairly straightforward; it involves topics such as what today’s date is, or what the capital of France is, or what Batman’s sidekick is called. Semi-tacit knowledge is knowledge that you can access, but that doesn’t always come to mind when needed, for various reasons; for instance, when a name is on the tip of your tongue, and you can’t quite recall it, and then it suddenly pops into your head days later when you’re thinking about something else. Tacit knowledge in the strict sense is knowledge that you have in your head, but that you can’t put into words, no matter how hard you try; for instance, knowledge of most of the grammatical rules of your own language, which you can clearly use at native-speaker proficiency, but which you can’t explicitly state. Within each of these three types, there are several sub-types, which we’ve discussed elsewhere.

So why is it that we don’t know what’s going on in our own heads, and does it relate to the problems that human beings have when they try to make logical, rational decisions? This takes us into the mechanisms that the brain uses to tackle different types of task, and into the implications for how people do or should behave, and the implications for assessing human rationality.
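
As a taste of the associative-network idea, here’s a minimal sketch in Python of spreading activation, where every node in the network is updated in parallel on each cycle. The network, the concept names and the weights are all invented for illustration; they aren’t taken from the article.

    # Minimal sketch of spreading activation in an associative network.
    # Every node updates in parallel on each cycle, unlike the
    # one-step-at-a-time processing of traditional serial logic.
    # Concepts and weights are invented for illustration.

    links = {
        "coffee": {"mug": 0.8, "morning": 0.6, "bitter": 0.4},
        "mug": {"coffee": 0.8, "kitchen": 0.5},
        "morning": {"coffee": 0.6, "alarm": 0.7},
        "bitter": {"coffee": 0.4, "lemon": 0.5},
        "kitchen": {"mug": 0.5},
        "alarm": {"morning": 0.7},
        "lemon": {"bitter": 0.5},
    }

    DECAY = 0.5  # how much existing activation fades each cycle

    def spread(activation, cycles=3):
        """Propagate activation through the whole network in parallel."""
        for _ in range(cycles):
            new = {node: level * DECAY for node, level in activation.items()}
            for node, level in activation.items():
                for neighbour, weight in links.get(node, {}).items():
                    new[neighbour] = new.get(neighbour, 0.0) + level * weight
            activation = new
        return activation

    # Activating "coffee" pulls related concepts to mind, with no explicit rule.
    for concept, level in sorted(spread({"coffee": 1.0}).items(),
                                 key=lambda kv: -kv[1]):
        print(f"{concept}: {level:.2f}")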

Continue reading

Tacit knowledge: Can’t and won’t

By Gordon Rugg and Sue Gerrard

This is the third post in a short series on semi-tacit and tacit knowledge. The first article gave an overview of the topic, structured round a framework of what people do, don’t, can’t or won’t tell you. The second focused on the various types of do (explicit) and don’t (semi-tacit) knowledge. Here, we look at can’t (strictly tacit) and won’t knowledge.

The issues involved are summed up in the diagram below.

Continue reading

Explicit and semi-tacit knowledge

By Gordon Rugg and Sue Gerrard

This is the second in a series of posts about explicit, semi-tacit and tacit knowledge.

It’s structured around a four-way model of whether people do, don’t, can’t or won’t state the knowledge. If they do state it, it is explicit knowledge, which can be accessed via any method. If they don’t, can’t or won’t state it, then it is some form of semi-tacit or strictly tacit knowledge, which can only be accessed via a limited set of methods such as observation, laddering or think-aloud.

This is summed up in the image below.

The previous article in this series gave an overview. In the present article, we focus on do and don’t knowledge, i.e. explicit and semi-tacit knowledge.
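
As a rough sketch of how that four-way model could be expressed in code, here’s a minimal Python version. The mapping follows the article’s broad claim (explicit knowledge can be reached by any method; the other three categories need a limited set of methods); the exact labels and function names are mine.

    # Minimal sketch of the do/don't/can't/won't model. The mapping follows
    # the article's broad claim: explicit ("do") knowledge can be reached by
    # any method, while the other three categories need a limited set of
    # methods. Labels and names are mine.

    KNOWLEDGE_TYPES = {
        "do": "explicit",
        "don't": "semi-tacit",
        "can't": "strictly tacit",
        "won't": "deliberately withheld",  # paraphrased label
    }

    LIMITED_METHODS = ["observation", "laddering", "think-aloud"]

    def suitable_methods(category):
        """Return elicitation methods suited to a do/don't/can't/won't category."""
        if category == "do":
            return ["any method"]  # e.g. interviews, questionnaires, and more
        return LIMITED_METHODS

    for category, knowledge_type in KNOWLEDGE_TYPES.items():
        methods = ", ".join(suitable_methods(category))
        print(f"{category:>6} -> {knowledge_type}: {methods}")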

Continue reading

Modeling misunderstandings

By Gordon Rugg

Many problems in life are caused by misunderstandings. Misunderstandings take various forms. These forms themselves are, ironically, often misunderstood.

In this article, I’ll look at ways of representing misunderstandings visually, to clarify what is going wrong, and how to fix it.

I’ll use a positive/negative plot to show the different forms of misunderstanding. This lets you locate a given statement in terms of how positive it is, and how negative it is, as in the image below. This format is particularly useful for representing mixed messages, which are an important feature of many misunderstandings. There’s more about versions of this format here and here.
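
As an illustration of the general format, here’s a minimal matplotlib sketch of a positive/negative plot. The example statements and their scores are invented for illustration; the point to notice is that a mixed message scores high on both axes at once, which a single good-to-bad scale can’t capture.

    # Minimal sketch of a positive/negative plot. Each statement gets two
    # separate scores: how positive it is, and how negative it is. A mixed
    # message scores high on both axes at once. Statements and scores are
    # invented for illustration.
    import matplotlib.pyplot as plt

    statements = {
        "Great work!": (0.9, 0.0),
        "Terrible work.": (0.0, 0.9),
        "Great work... for a beginner.": (0.7, 0.6),  # mixed message
        "It's fine, I suppose.": (0.2, 0.2),
    }

    fig, ax = plt.subplots()
    for text, (positive, negative) in statements.items():
        ax.scatter(positive, negative)
        ax.annotate(text, (positive, negative),
                    textcoords="offset points", xytext=(5, 5))

    ax.set_xlabel("How positive the statement is")
    ax.set_ylabel("How negative the statement is")
    ax.set_xlim(-0.05, 1.05)
    ax.set_ylim(-0.05, 1.05)
    ax.set_title("Positive/negative plot (illustrative values)")
    plt.show()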

Continue reading

Premature closure and authoritarian worldviews

By Gordon Rugg

In a previous article, I looked at the belief structures of the archetypal “crazy uncle” worldview. It’s a worldview with a strong tendency towards whichever option requires the minimum short-term cognitive load; for example, binary yes/no categorisations rather than greyscales or multiple categories.

One theme I didn’t explore in that article, for reasons of space, was premature closure. This article picks up that theme.

Premature closure is closely related to pre-emptive categorisation, which I’ll also discuss in this article. Both these concepts have significant implications, and both involve minimising short-term cognitive load, usually leading to problems further down the road. Both tend to be strongly associated with the authoritarian worldview, for reasons that I’ll unpack later.

So, what is premature closure? In brief, it’s when you make a decision too early, shutting down the process of search, evaluation and decision-making before it’s complete. This takes various forms; knowing these forms improves your chances of stopping at the right point. For clarity, I’ll use examples within the context of common “crazy uncle” arguments.
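
As a toy illustration of the mechanism (my example, not one from the article), here’s a sketch contrasting premature closure with a fuller search: accepting the first option that clears a low bar, versus evaluating all the options before deciding.

    # Toy sketch of premature closure in decision-making. The "close early"
    # strategy accepts the first option that clears a low bar, minimising
    # short-term cognitive load; the full search evaluates every option.
    # Options and scores are invented for illustration.

    options = [
        ("first idea that comes to mind", 0.55),
        ("familiar default", 0.60),
        ("option nobody has mentioned yet", 0.90),
        ("awkward compromise", 0.45),
    ]

    def decide_with_premature_closure(options, good_enough=0.5):
        """Stop searching as soon as any option clears a low threshold."""
        for name, score in options:
            if score >= good_enough:
                return name, score  # search closes here; later options are never seen

    def decide_with_full_search(options):
        """Evaluate every option before committing."""
        return max(options, key=lambda option: option[1])

    print("premature closure:", decide_with_premature_closure(options))
    print("full search:", decide_with_full_search(options))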

Continue reading

Why do people behave like idiots?

By Gordon Rugg

Other people frequently appear to behave like idiots. As is often the case, there’s a simple explanation for this, and as is also often the case, the full story is a bit more complex, but still manageably simple once you get your head round it.

First, the simple explanation. People usually behave in a way that looks idiotic for one or more of the following three reasons:

  • Sin
  • Error
  • Slips

This model comes from the literatures on human error and on safety-critical systems; there are variations on the wording and on some of the detail (particularly around slips) but the core concepts are usually the same.

  • Sin (or “violations” in the more common version of the name) involves someone deliberately setting out to do the wrong thing. I’ll return later to possible reasons for people doing this.
  • Error involves people having mistaken beliefs; for example, they believe that closing a particular valve will solve a particular problem.
  • Slips involve someone intending to do one thing, but unintentionally doing something different; for example, intending to press the button on the left, but accidentally pressing the button on the right.
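
As a rough sketch, the three-way model above can be written as a small decision procedure: was the act deliberate, was the underlying belief mistaken, or did the action simply fail to match the intention? The field and function names below are mine, invented for illustration.

    # Rough sketch of the sin/error/slip model. The questions paraphrase the
    # definitions above; the field and function names are invented for
    # illustration.
    from dataclasses import dataclass

    @dataclass
    class Act:
        deliberate_wrongdoing: bool     # did they set out to do the wrong thing?
        belief_was_correct: bool        # was the underlying belief accurate?
        action_matched_intention: bool  # did they do what they meant to do?

    def classify(act: Act) -> str:
        if act.deliberate_wrongdoing:
            return "sin (violation): deliberately doing the wrong thing"
        if not act.belief_was_correct:
            return "error: acting on a mistaken belief"
        if not act.action_matched_intention:
            return "slip: intending one thing, unintentionally doing another"
        return "no apparent idiocy: sound beliefs, intended action"

    # Example: meant to press the left button, pressed the right one instead.
    print(classify(Act(deliberate_wrongdoing=False,
                       belief_was_correct=True,
                       action_matched_intention=False)))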

Continue reading

Mental models, and making sense of crazy uncles

By Gordon Rugg

The crazy uncle is a well-established and much-dreaded part of Western culture. There’s probably a very similar figure in other cultures too, but in this article, I’ll focus on the Western one, and on what is going on in his head.

Why are crazy uncles permanently angry, and keen to inflict their opinions, prejudices and conspiracy theories on other people? Some parts of the answer are already well covered in popular media and in specialist research, but other parts are less well known.

In this article, I’ll give a brief overview of the better known elements, and then combine them with insights from knowledge modeling, and see what sort of answer emerges.

Continue reading

Mental models and metalanguage: Putting it all together

By Gordon Rugg

The previous articles in this series looked at mental models and ways of making sense of problems. A recurrent theme in those articles was that using the wrong model can lead to disastrous outcomes.

This raises the question of how to choose the right model to make sense of a problem. In this article, I’ll look at the issues involved in answering this question, and then look at some practical solutions.

Continue reading

Mental models, worldviews, Meccano, and systems theory

By Gordon Rugg

The previous articles in this series looked at how everyday entities such as a cup of coffee or a Lego pack can provide templates for thinking about other subjects, particularly abstract concepts such as justice, and entities that we can’t directly observe with human senses, such as electricity.

Those articles examined templates for handling entities that stay where they’re put. With Lego blocks or a cup of coffee, once you’ve put them into a configuration, they stay in that configuration unless something else disturbs them. The Lego blocks stay in the shape you assembled them in; the cup of coffee remains a cup of coffee.

However, not all entities behave that way. In this article, I’ll examine systems theory, and its implications for entities that don’t stay where they’re put, but instead behave in ways that are often unexpected and counter-intuitive. I’ll use Meccano as a worked example.
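
As a taste of why such systems behave counter-intuitively, here’s a minimal sketch (my example, not the article’s Meccano one) of a feedback loop that reacts to slightly stale information. Each correction is sensible on its own, but together they make the system overshoot and oscillate instead of settling at the target.

    # Minimal sketch of counter-intuitive system behaviour: a feedback loop
    # reacting to slightly stale information. Each correction looks sensible
    # in isolation, but the one-step delay makes the system overshoot and
    # oscillate instead of settling. Values are invented for illustration.

    target = 20.0        # the value the system is trying to reach
    level = 10.0         # the current value
    gain = 1.5           # how hard each correction pushes
    delayed_error = 0.0  # the system reacts to the *previous* step's error

    for step in range(10):
        error = target - level
        level += gain * delayed_error  # correction based on stale information
        delayed_error = error
        print(f"step {step}: level = {level:.1f}")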

Continue reading

Mental models, worldviews, and mocha

By Gordon Rugg

Mental models provide a template for handling things that happen in the world.

At their best, they provide invaluable counter-intuitive insights that let us solve problems which would otherwise be intractable. At their worst, they provide the appearance of solutions, while actually digging us deeper into the real underlying problem.

In this article, I’ll use a cup of mocha as an example of how these two outcomes can happen. I’ll also look at how this relates to the long-running debate about whether there is a real divide between the arts and the sciences as two different cultures.

Continue reading