Games people play, and their implications

By Gordon Rugg

There are regularities in how people behave. There are numerous ways of categorising these regularities, each with assorted advantages and disadvantages.

The approach I’ll discuss in this article is Transactional Analysis (TA), developed by Eric Berne and his colleagues. TA is designed to be easily understood by ordinary people, and it uses everyday terms for the regularities it describes.

I find TA fascinating and tantalising. On the plus side, it offers powerful insights into human behaviour, clear and rigorous analysis, and very practical implications. On the minus side, it doesn’t draw on well-established methods and concepts from other fields that would give it considerably more power. It has never really taken off, although it has a strong popular following.

An illustrative example of why it’s fallen short of its potential is the name of Berne’s classic book on the topic, Games People Play. When you read the book, the explanation of the name makes perfect sense, and the deadly seriousness of games becomes very apparent. If you only look at the title, though, it’s easy to assume that the book and the approach it describes are about trivial pastimes, rather than core features of human behaviour.

In this article, I’ll look at some core concepts of Transactional Analysis, and how they give powerful insights into profoundly serious issues in entertainment, in politics, and in science.

The image below shows games at their most deadly serious: Roman gladiatorial combat, where losing could mean death.

Image credit: unknown author – Livius.org, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3479030



Modeling misunderstandings

By Gordon Rugg

Many problems in life are caused by misunderstandings. Misunderstandings take various forms. These forms themselves are, ironically, often misunderstood.

In this article, I’ll look at ways of representing misunderstandings visually, to clarify what is going wrong, and how to fix it.

I’ll use a positive/negative plot to show the different forms of misunderstanding. This lets you locate a given statement in terms of how positive it is and how negative it is, as in the image below. This format is particularly useful for representing mixed messages, which are an important feature of many misunderstandings. There’s more about versions of this format in earlier articles on this blog.
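As a rough illustration of the format, here’s a minimal Python sketch using matplotlib; the statements and their scores are made-up assumptions of mine, purely for demonstration. A mixed message is one that scores high on both axes at once.

```python
import matplotlib.pyplot as plt

# Hypothetical statements with invented (positive, negative) scores on a
# 0-10 scale; a mixed message scores high on both axes at once.
statements = {
    "Great job!": (9, 0),
    "Great job... for a beginner": (7, 6),  # mixed message
    "That was terrible": (1, 8),
    "It was fine, I suppose": (3, 2),
}

fig, ax = plt.subplots()
for label, (pos, neg) in statements.items():
    ax.scatter(pos, neg)
    ax.annotate(label, (pos, neg), xytext=(5, 5), textcoords="offset points")

ax.set_xlabel("How positive the statement is")
ax.set_ylabel("How negative the statement is")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
plt.show()
```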


Premature closure and authoritarian worldviews

By Gordon Rugg

In a previous article, I looked at the belief structures of the archetypal “crazy uncle” worldview. It’s a worldview with a strong tendency towards whichever option requires the minimum short-term cognitive load; for example, binary yes/no categorisations rather than greyscales or multiple categories.

One theme I didn’t explore in that article, for reasons of space, was premature closure. This article picks up that theme.

Premature closure is closely related to pre-emptive categorisation, which I’ll also discuss in this article. Both these concepts have significant implications, and both involve minimising short-term cognitive load, usually leading to problems further down the road. Both tend to be strongly associated with the authoritarian worldview, for reasons that I’ll unpack later.

So, what is premature closure? In brief, it’s when you make a decision too early, prematurely closing down the process of searching, evaluating, and deciding. This takes various forms; knowing them improves your chances of stopping at the right point. For clarity, I’ll use examples within the context of common “crazy uncle” arguments.
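One way to picture the core mechanism, as a hedged sketch of my own rather than anything from the article: premature closure behaves like a search loop that commits to the first option clearing a low bar, instead of evaluating every option before deciding.

```python
# Illustrative only: the options and scores below are invented for the example.

def premature_choice(options, score, good_enough):
    """Take the first option that clears the bar; ignore everything after it."""
    for option in options:
        if score(option) >= good_enough:
            return option  # search, evaluation and decision close down here
    return None

def thorough_choice(options, score):
    """Evaluate every option before committing to a decision."""
    return max(options, key=score)

explanations = {
    "it's a conspiracy": 3,       # low effort, low quality
    "it's complicated": 6,
    "detailed causal analysis": 9,
}

print(premature_choice(explanations, explanations.get, good_enough=3))
# -> "it's a conspiracy"
print(thorough_choice(explanations, explanations.get))
# -> "detailed causal analysis"
```

The contrast is deliberate: the first function minimises short-term cognitive load, at the cost of a worse answer further down the road.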


Mental models, and making sense of crazy uncles

By Gordon Rugg

The crazy uncle is a well-established and much-dreaded part of Western culture. There’s probably a very similar figure in other cultures too, but in this article, I’ll focus on the Western one, and on what is going on in his head.

Why are crazy uncles permanently angry, and keen to inflict their opinions, prejudices and conspiracy theories on other people? Some parts of the answer are already well covered in popular media and in specialist research, but other parts are less well known.

In this article, I’ll give a brief overview of the better-known elements, then combine them with insights from knowledge modeling, and see what sort of answer emerges.


Mental models, worldviews, and the span of consistency

By Gordon Rugg

In politics and religion, a common accusation is that someone is being hypocritical or inconsistent. The previous article in this series looked at how this can arise from the irregular adjective approach to other groups; for example, “Our soldiers are brave” versus “Their soldiers are fanatical” when describing otherwise identical actions.

Often, though, inconsistency is an almost inevitable consequence of dealing with complexity. Mainstream political movements, like organised religions, spend a lot of time and effort identifying and resolving inherent contradictions within their worldview. The process is so demanding because of the sheer number of possible combinations of beliefs within any mature worldview.
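To make the scale of that problem concrete, here’s a back-of-the-envelope sketch; the belief counts are my own illustrative choices, not anything from the article. Each pair of beliefs is a potential contradiction to check, so the checking workload grows quadratically, while the number of possible belief combinations grows exponentially.

```python
from math import comb

# n beliefs give n*(n-1)/2 pairs to check for mutual consistency, and
# 2**n possible accept/reject combinations across the whole worldview.
for n in (10, 100, 1000):
    pairs = comb(n, 2)
    digits = len(str(2 ** n))
    print(f"{n:>4} beliefs: {pairs:>6} pairwise consistency checks, "
          f"roughly 10**{digits - 1} possible combinations")
```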

In this article, I’ll work through the implications of this simple but extremely significant issue.


Mental models, and the Other as dark reflection

By Gordon Rugg

This article is the first in a series about mental models and their implications both for worldviews and for everyday behaviour. Mental models are at the core of how we think and act. They’ve received a lot of attention from various disciplines, which is good in that there’s plenty of material to draw on, and less good in that there’s no single clear, unified framework.

In these articles, I’ll look at how we can use some clean, elegant formalisms to make more sense of what mental models are, and how they can go wrong. Much of the classic work on mental models has focused on specific, small-scale problems. I’ll focus mainly on the other end of the scale, where mental models have implications so far-reaching that they’re major components of worldviews.

Mental models are a classic case of the simplicity beyond complexity. Often, something in a mental model that initially looks trivial turns out to be massively important and complex; there’s a new simplicity at the other side, but only after you’ve waded through that intervening complexity. For this reason, I’ll keep the individual articles short, and then look in more detail at the implications in separate articles, rather than trying to do too much in one article.

I’ll start with the Other, to show how mental models can have implications at the level of war versus peace, as well as at the level of interpersonal bigotry and harassment.

The Other is a core concept in sociology and related fields. It’s pretty much what it sounds like. People tend to divide the world into Us and Them. The Other is Them. The implications are far-reaching.

The full story is, as you might expect, more complex, but the core concept is that simple. In this article, I’ll look at that surface simplicity, and at the implications of two different forms it can take.

It’s a topic that takes us into questions about status, morality, and what happens when beliefs collide with reality.


Crisp and fuzzy categorisation

By Gordon Rugg

Categorisation occurs pretty much everywhere in human life. Most of the time, most of the categorisation appears so obvious that we don’t pay particular attention to it. Every once in a while, though, a case crops up which suddenly calls our assumptions about categorisation into question, and raises uncomfortable questions about whether there’s something fundamentally wrong in how we think about the world.

In this article, I’ll look at one important aspect of categorisation, namely the difference between crisp sets and fuzzy sets. It looks, and is, simple, but it has powerful and far-reaching implications for making sense of the world.

I’ll start with the example of whether or not you own a motorbike. At first glance, this looks like a straightforward question which divides people neatly into two groups, namely those who own motorbikes, and those who don’t. We can represent this visually as two boxes, with a crisp dividing line between them, like this.

However, when you’re dealing with real life, you encounter a surprising number of cases where the answer is unclear. Suppose, for instance, that someone has jointly bought a motorbike with their friend. Does that person count as being the owner of a motorbike, when they’re actually the joint owner? Or what about someone who has bought a motorbike on hire purchase, and has not yet finished the payments?
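Here’s a minimal Python sketch of the crisp/fuzzy distinction; the ownership shares are illustrative assumptions of mine, not anything from the article. Crisp membership forces every case into 0 or 1, whereas fuzzy membership allows any degree in between.

```python
# Illustrative only: the ownership shares below are invented for the example.

def crisp_owns_motorbike(share: float) -> int:
    """Crisp set: you're either in (1) or out (0), with no middle ground."""
    return 1 if share >= 1.0 else 0

def fuzzy_owns_motorbike(share: float) -> float:
    """Fuzzy set: membership can take any degree between 0 and 1."""
    return max(0.0, min(1.0, share))

cases = {
    "sole owner": 1.0,
    "joint owner with a friend": 0.5,
    "halfway through hire purchase payments": 0.5,
    "doesn't own a motorbike": 0.0,
}

for label, share in cases.items():
    print(f"{label}: crisp = {crisp_owns_motorbike(share)}, "
          f"fuzzy = {fuzzy_owns_motorbike(share):.1f}")
```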


Why Hollywood gets it wrong, part 2

By Gordon Rugg

The first article in this short series looked at one reason for movies presenting a distorted version of reality, namely conflict between conventions.

Today’s article looks at a reason for movies presenting a simplified version of reality. It involves reducing cognitive load for the audience, and it was studied in detail by Grice, in his work on the principles of communication. It can be summed up in one short principle: Say all of, but only, what is relevant and necessary.

At first sight, this appears self-evident. There will be obvious problems if you don’t give the other person all of the information they need, or if you throw in irrelevant and unnecessary information.

In reality, though, it’s not always easy to assess whether you’ve followed this principle correctly. A particularly common pitfall is assuming that the other person already knows something, and in consequence not bothering to mention it. Other pitfalls are subtler, and have far-reaching implications for fields as varied as politics, research methods, and setting exams. I’ll start by examining a classic concept from the detective genre, namely the red herring.

Image credit (“five red herrings” banner): herring image by Lupo – self-made, based on Image:Herring2.jpg by User:Uwe kils, which is licensed GFDL / CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2610685


Catastrophic success

By Gordon Rugg

Sometimes, you know a concept, but don’t know a name for it. I’m grateful to Colin Rigby for introducing me to a name for this article’s topic, namely catastrophic success.

It’s a concept that’s been around for a long time, in fields as varied as business planning and the original Conan the Barbarian movie. It’s simple, so this will be a short article, but it’s a very powerful concept, and well worth knowing about.



Grand Unified Theories

By Gordon Rugg

If you’re a researcher, there’s a strong temptation to find a Grand Unified Theory for whatever you’re studying, whether you’re a geologist, a physicist, a psychologist, or from some other field.

That temptation is understandable. There’s the intellectual satisfaction of making sense of something that had previously been formless chaos; there’s the moral satisfaction of giving new insights into long-established problems; for the less lofty-minded, there’s the prospect of having a law or theory named after oneself.

Just because it’s understandable, however, doesn’t mean that it’s always a good idea. For every Grand Unified Theory that tidies up some part of the natural world, there’s at least one screwed-up idea that will waste other people’s time, and quite possibly increase chaos and unpleasantness.

This article explores some of the issues involved. As worked examples, I’ll start with an ancient stone map of Rome, and move on later to a Galloway dyke, illustrated below.

Sources for original images are given at the end of this article.
