Premature closure and authoritarian worldviews

By Gordon Rugg

In a previous article, I looked at the belief structures of the archetypal “crazy uncle” worldview. It’s a worldview with a strong tendency towards whichever option requires the minimum short-term cognitive load; for example, binary yes/no categorisations rather than greyscales or multiple categories.

One theme I didn’t explore in that article, for reasons of space, was premature closure. This article picks up that theme.

Premature closure is closely related to pre-emptive categorisation, which I’ll also discuss in this article. Both these concepts have significant implications, and both involve minimising short-term cognitive load, usually leading to problems further down the road. Both tend to be strongly associated with the authoritarian worldview, for reasons that I’ll unpack later.

So, what is premature closure? In brief, it’s when you make a decision too early, prematurely closing down the process of searching, evaluating and deciding. This takes various forms; knowing them improves your chances of stopping at the right point. For clarity, I’ll use examples from common “crazy uncle” arguments.

One common form involves a binary, yes/no decision based on whether there is any evidence supporting your pet belief. The usual format is to present any supporting evidence, however weak, as proving that the pet belief is right; sounder reasoning treats that evidence only as grounds for further investigation, as shown in the sketch below.
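As a deliberately simplified Python sketch of the two patterns (the function names and verdict strings are my own invention, not a formal model of reasoning):

    # Unsound: any supporting evidence at all is treated as settling the question.
    def crazy_uncle_verdict(evidence_for):
        return "belief proven" if evidence_for else "belief disproven"

    # Sounder: evidence changes what you do next, not the final verdict.
    def cautious_verdict(evidence_for, evidence_against):
        if not evidence_for and not evidence_against:
            return "no evidence yet; gather some before deciding"
        if evidence_for and not evidence_against:
            return "worth investigating further; not yet proven"
        return "conflicting evidence; weigh it before deciding"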

What happens when someone produces evidence against your pet belief? A common “crazy uncle” response is to claim that this shows there’s equal evidence on both sides, therefore it’s just a matter of opinion, and therefore their opinion is just as valid as the other person’s.

So far, we’ve been looking at binary options: whether there’s evidence for your opinion (yes/no) and whether there’s evidence against it (also yes/no). What happens when there’s clearly more than one piece of evidence?

One common form of dodgy argument is to treat all the pieces of evidence as having the same weight. This is a key feature of formal debates, where tactics such as the Gish gallop involve playing on the debate scoring system by making large numbers of claims that the other side won’t be able to refute within the time available. It’s also a common tactic in online debates, where a commenter puts up lengthy posts with huge numbers of anecdotal claims and factoids, then claims victory because their opponents haven’t refuted all of those claims and factoids.

This problem can be reduced by weighting the evidence, acknowledging that some pieces carry far more weight than others. In the case of continental drift, for example, there were many pieces of evidence suggesting that Africa and the Americas had once been joined, but there was a single, massive, show-stopping counter-argument: nobody could work out how vast continents of solid rock could move thousands of miles.
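As a rough Python illustration of the difference between counting claims and weighing them, with made-up numbers standing in for real weights:

    # Continental drift, crudely quantified; the weights are invented for illustration.
    evidence = [
        ("matching coastlines across the Atlantic", +1.0),
        ("matching fossils on both continents", +1.0),
        ("matching rock strata on both continents", +1.0),
        ("no known mechanism for moving continents", -10.0),  # the show-stopper
    ]

    def claim_count(evidence):
        # Gish-gallop scoring: every claim counts the same, whatever its weight.
        return sum(1 if weight > 0 else -1 for _, weight in evidence)

    def weighted_tally(evidence):
        return sum(weight for _, weight in evidence)

    print(claim_count(evidence))     # +2: drift "wins" by sheer number of claims
    print(weighted_tally(evidence))  # -7.0: the single show-stopper dominates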

However, weighting the evidence doesn’t guarantee sound conclusions. For many problems, much of the necessary evidence hasn’t been gathered yet. A common bad-faith tactic is to present only the weight of the existing evidence, framed as if it were the whole story, while ignoring the question of whether more evidence needs to be gathered. Fixing this problem takes you into questions of research design; there isn’t an easy answer. Still, even if you can’t guarantee getting a right answer, you can reduce your chances of getting a wrong one, which is a good start.

Satisficing and optimising

The issue of premature closure overlaps with other issues. One involves decision-making via satisficing versus decision-making via optimising. Satisficing is about making a good-enough choice; optimising is about making the best choice possible. For example, if you need a pencil to write down a phone number, any pencil will do; however, if you need a pencil because you’re a professional artist about to do a commissioned drawing, then choosing the right pencil for the job is important.

When you’re satisficing, the sensible tactic is to stop searching as soon as you’ve found a solution that’s good enough; any further searching would just be a waste of time and effort. When you’re optimising, in contrast, you need to factor in the prospects of finding a significantly better solution, versus the cost of the search, which takes you into some heavy issues of statistics and of decision theory.
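Here’s a minimal Python sketch of the difference, using the pencil example (the function and parameter names are invented for illustration):

    def satisfice(pencils, is_good_enough):
        # Stop at the first pencil that will do; any further search is wasted effort.
        for pencil in pencils:
            if is_good_enough(pencil):
                return pencil
        return None  # nothing acceptable found

    def optimise(pencils, quality):
        # Examine every candidate and take the best, paying the full search cost.
        # A real optimiser facing search costs also needs a stopping rule: keep
        # looking only while the expected improvement beats the cost of looking.
        return max(pencils, key=quality)

For jotting down a phone number, satisfice with a “does it write at all?” test is the sensible call; for the commissioned drawing, the full search of optimise earns its cost.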

Pre-emptive categorisation

Pre-emptive categorisation is related to premature closure, but isn’t identical. It involves assigning someone or something to a category and then stopping at that point, without considering whether they might also belong to other categories, as shown below.

This is an issue in everyday life for anyone who belongs to more than one minority; for instance, being left handed and belonging to an ethnic minority. Many systems implicitly assume that people only belong to one category, and don’t consider what happens when categories overlap, or when someone might belong to more than one category.
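One way to see the difference is in how a record is modelled; here’s a toy Python sketch, with the class names invented for illustration:

    from dataclasses import dataclass, field

    # Pre-emptive categorisation: one slot, so the first label closes the search.
    @dataclass
    class SingleCategoryRecord:
        category: str  # e.g. "left-handed" OR "ethnic minority", never both

    # Allowing overlap: membership is a set, so one label doesn't exclude others.
    @dataclass
    class MultiCategoryRecord:
        categories: set = field(default_factory=set)

    person = MultiCategoryRecord()
    person.categories.update({"left-handed", "ethnic minority"})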

This issue is well recognised in politics and sociology, via concepts such as intersectionality, but it also occurs in a wide range of other settings, as a by-product of limitations in human cognition. For instance, pre-emptive categorisation can occur with regard to attributes of products or services or of objects in the natural world.

There are ways of reducing this risk; for example, by using a visual representation that explicitly includes all the relevant identified categories, as shown below. A further refinement is to include in this representation two or more blank “other” categories, to nudge users towards checking whether there is anything else that needs to be considered in the context where they’re working. The reason for having more than one “other” box is to reduce the risk of premature closure in the search for “other” categories.

In the sketch below, the first two standard categories apply, but the third standard category doesn’t. Searching for “other” categories has produced one such category, but the search did not find any others; this is a significant absence, as opposed to a simple absence.
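Here’s a toy Python rendering of that checklist; the category names and the two-field layout are my own invention, chosen to make the two kinds of absence explicit:

    # Each entry records whether it was checked and what the check found, so a
    # deliberate "checked, not found" (significant absence) stays distinct from
    # "never checked" (simple absence).
    checklist = {
        "standard category 1": {"checked": True, "applies": True},
        "standard category 2": {"checked": True, "applies": True},
        "standard category 3": {"checked": True, "applies": False},  # doesn't apply
        "other 1":             {"checked": True, "applies": True},   # one extra category found
        "other 2":             {"checked": True, "applies": False},  # searched; significant absence
    }

    def simple_absences(checklist):
        # Anything unchecked is a simple absence: we just don't know yet.
        return [name for name, entry in checklist.items() if not entry["checked"]]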

This leads in to the concluding topic for this article.

Rethinking the “authoritarian worldview”

There’s been a lot of excellent research in the “authoritarian personality” tradition on what that cluster of beliefs consists of. However, there’s also the question of where those beliefs come from. The usual answers involve cultural transmission, social dynamics, and memeplexes of ideas that interlink with each other.

All of those are valid points, but as this article and the previous one showed, there’s a strong argument for reframing the issue to separate the sociopolitical questions from the neurocognitive ones. Most of the characteristics associated with the “authoritarian personality” or the “authoritarian worldview” can be explained simply as outcomes of a “minimal short-term cognitive load” approach to beliefs and decision-making.

This has significant implications. For instance, it implies that the mechanisms described above will apply across human behaviour, not just in politics and religion. It also implies that any attempt to improve the situation needs to factor these cognitive limitations into discussions and explanations.

This is where two concepts are particularly promising.

One is simplicity beyond complexity. Often there’s a simple, obvious answer, but the full story is more complex; beyond that complexity, however, there’s often an elegantly simple explanation that handles the complexity and is easy for anyone to understand.

The other concept involves using the appropriate representation to show the simplicity beyond complexity in a way that is clear and easy to understand. We have blogged about various ways of doing this in previous articles.

So, pulling all this together:

  • The “authoritarian worldview” looks like one form of a broader phenomenon, namely the “minimal short-term cognitive load” approach to beliefs and decision-making.
  • There are simple and effective ways of reducing errors, misunderstandings and conflict caused by that approach.

That’s a positive note on which to end, so I’ll end here.

Notes and links

You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.

There’s more about the theory behind this article in my latest book:

Blind Spot, by Gordon Rugg with Joseph D’Agnese

http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903

You might also find our website useful:

http://www.hydeandrugg.com/

Overviews of the articles on this blog:

https://hydeandrugg.wordpress.com/2015/01/12/the-knowledge-modelling-book/

https://hydeandrugg.wordpress.com/2015/07/24/200-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/09/19/150-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/04/28/one-hundred-hyde-rugg-articles-and-the-verifier-framework/
