Mental models, worldviews, and the span of consistency

By Gordon Rugg

In politics and religion, a common accusation is that someone is being hypocritical or inconsistent. The previous article in this series looked at how this can arise from the irregular adjective approach to other groups; for example, “Our soldiers are brave” versus “Their soldiers are fanatical” when describing otherwise identical actions.

Often, though, inconsistency is an almost inevitable consequence of dealing with complexity. Mainstream political movements, like organised religions, devote a lot of time and effort to identifying and resolving inherent contradictions within their worldview. The process is so demanding because of the sheer number of possible combinations of beliefs within any mature worldview.

In this article, I’ll work through the implications of this simple but extremely significant issue.

Suppose, for example, that you’re putting together a chain of reasoning that begins with assertion A, and ends in conclusion F, as in the diagram below. (For simplicity, I’m using a linear chain as an example, to keep things manageable.)

You decide that you want to check that all of the points in that chain are consistent with each other. How much effort will that take? More than you might expect. Here’s what the set of possible cross-checks looks like for just the first five points (A to E). The internally consistent cross-checks are shown in blue; an inconsistent link, between A and D, is shown in red.
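If you want to see the bookkeeping made concrete, here’s a minimal sketch in Python that enumerates every pairwise cross-check for the five points; the single inconsistent A–D link is hard-coded to match the example above.

```python
from itertools import combinations

points = ["A", "B", "C", "D", "E"]

# Illustrative assumption: the only inconsistent link is A-D,
# matching the red link in the diagram described above.
inconsistent_pairs = {frozenset(("A", "D"))}

# Assuming consistency is symmetric (checking A against B is the
# same as checking B against A), the cross-checks are the
# unordered pairs: n * (n - 1) / 2 of them.
checks = list(combinations(points, 2))
print(f"{len(checks)} cross-checks for {len(points)} points")  # 10

for a, b in checks:
    status = "INCONSISTENT" if frozenset((a, b)) in inconsistent_pairs else "ok"
    print(f"{a}-{b}: {status}")
```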

I’ve shown the cross-checks for five points for two reasons.

One is that the number of cross-checks involved (10, i.e. 5×4/2 unordered pairs) is already beyond the limits of normal human working memory (about seven items, plus or minus two). Working out this many cross-checks in your head would be unrealistic, unless you had a very well-rehearsed method for doing so. For simplicity, I’ve assumed here that consistency between point A and point B is the same as consistency between point B and point A, which halves the number of cross-checks. This assumption isn’t always true, but I’ll leave that issue to one side for the time being; the key point remains that the number of combinations between points rapidly becomes enormous.

The second reason for using this number of points is that five points is what you get if you check just two points before the current point and two points after it. Even that small number is already more than human working memory can handle.

What happens if you instead use a running check of only the point before and the point after the current point? You get the results below for the same set of points.

For the first full step, looking each side of point B, all three points are internally consistent, like this.

For the second step, looking each side of point C, all three points being checked are also internally consistent, like this.

However, this approach has missed the inconsistency between A and D, which was picked up by looking two points forward and two points back.
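Here’s a sketch of the same idea as a running check with a configurable span, again with the hypothetical A–D clash hard-coded as the only inconsistency; with a span of one, A and D never appear in the same window, while a span of two catches them.

```python
from itertools import combinations

points = ["A", "B", "C", "D", "E"]
inconsistent_pairs = {frozenset(("A", "D"))}  # illustrative assumption

def windowed_check(points, span):
    """Slide a window of +/- span around each point, cross-check
    every pair inside that window, and return inconsistencies found."""
    found = set()
    for i in range(len(points)):
        window = points[max(0, i - span): i + span + 1]
        for a, b in combinations(window, 2):
            if frozenset((a, b)) in inconsistent_pairs:
                found.add(frozenset((a, b)))
    return found

print(windowed_check(points, span=1))  # set(): the A-D clash is missed
print(windowed_check(points, span=2))  # {frozenset({'A', 'D'})}: caught
```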

Implications

So, whether or not you detect an inconsistency in a chain of reasoning depends on the span of links that you’re looking at. Worldviews and ideologies involve so many points that the number of possible cross-checks is well beyond what a human being could work through in a lifetime. So, inconsistencies are going to occur within any worldview.
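To put rough numbers on that claim, here’s a quick back-of-the-envelope calculation (the belief counts are arbitrary illustrative figures):

```python
# Pairwise cross-checks grow with the square of the number of beliefs.
for n in (5, 100, 1_000, 10_000):
    pairwise = n * (n - 1) // 2
    print(f"{n:>6} beliefs: {pairwise:>12,} pairwise cross-checks")

# At one check per second, non-stop, 10,000 beliefs would take
# roughly 1.6 years of pairwise checking alone - and pairs are
# only the simplest kind of clash.
```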

How do people handle inconsistencies when these are brought to their attention? In brief, the usual responses involve various forms of dodgy reasoning, rather than any serious attempt to work through the implications of the inconsistency. This is particularly the case when the implications are close to the individual’s core beliefs.

It’s tempting to believe that if you look closely enough at the links you’re examining, you’ll catch the inconsistencies anyway, so you don’t need to worry about this problem. That, however, isn’t true.

A classic example is Zeno’s paradox, which I’ve blogged about here.

I used the version of the paradox which features an archer’s arrow on its way to a target. Before the arrow can reach the target, it has to cover half the distance. Before that, though, it has to cover a quarter of the distance, and before that an eighth, and so on; you can repeat this halving indefinitely. The implication is that because there are infinitely many of these steps, the process is never completed, and therefore movement is impossible.

This chain of reasoning is clearly wrong, but the reasons for it being wrong are subtle and complex, involving the nature of infinity, which gets pretty weird.
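For readers who want the gist of the standard resolution: the half-distances form a geometric series whose total is finite, so infinitely many steps add up to only a finite distance (and, at constant speed, a finite time). A quick numerical sketch:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... approach 1: infinitely many
# steps, but a finite total distance.
total = 0.0
for n in range(1, 51):
    total += 0.5 ** n
print(total)  # 0.9999999999999991 - closing in on the full distance of 1
```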

So, if it’s hard to see subtle errors in a chain of reasoning about something as familiar as movement, what are the odds of spotting subtle errors in a chain of reasoning about something less familiar? In brief: Poor to negligible.

What can you do about this problem if you’re making a good-faith attempt to put together a worldview, or just a small chain of reasoning about a problem?

That’s the issue at the heart of the scientific method. A key point about science is that it’s a process, not a set of fixed beliefs. Scientific knowledge is always provisional, not absolute. It keeps being changed as new discoveries come in. Sometimes those changes are tiny; sometimes they’re huge.

This approach is very different from the core concept of political and religious worldviews, which are usually based on allegedly fixed core beliefs. Because of the sheer number of beliefs in any serious worldview, it’s simply not possible to check all the possible combinations of beliefs for consistency within a human lifetime. Even if you have large numbers of believers checking for inconsistencies, there’s still the issue of how to fix those inconsistencies without causing schisms. The history of political and religious disagreements doesn’t inspire much confidence in people’s ability to resolve such schisms in an amicable manner that doesn’t lead to a body count.

In practice, core beliefs often morph over time in grudging response to the realities of the day, usually at a pace slow enough to be imperceptible on a day-to-day basis, which reduces the risk of the faithful perceiving the changes as inconsistencies.

Which takes us back to the starting point of this article, and is as good a place as any to stop.

In the next article, I’ll look at a mental model derived from everyday experience, and at what happens when you extend that model from the physical world to perceptions of virtue and sin.

Some related concepts and further reading

There has been a lot of work in mathematics and related fields on the problem of combinations. The mathematical issue of increasing numbers of possible combinations is a core part of the travelling salesman problem. This is a well-recognised major problem for a wide range of fields, since the number of possible routes grows factorially, i.e. even faster than exponentially; for even a few dozen points, the number of combinations is far too large for any human being to work through within a lifetime.
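For a sense of scale, here’s the standard arithmetic: a round trip through n cities can be made in (n−1)!/2 distinct ways (assuming the distance between two cities is the same in both directions), so the count explodes almost immediately.

```python
from math import factorial

# Number of distinct round trips through n cities: (n - 1)! / 2
for n in (5, 10, 20, 30):
    tours = factorial(n - 1) // 2
    print(f"{n:>2} cities: {tours:,} possible tours")

# 30 cities already give ~4.4 x 10^30 tours; at a billion tours
# checked per second, that is around 10^14 years.
```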

There’s also a lot of work within graph theory on related concepts, such as finding the shortest route between two points, which has big implications for fields as diverse as electronics and route finding in satnav systems.
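As a taster, here’s a minimal sketch of the classic shortest-route approach (Dijkstra’s algorithm), using Python’s standard heapq module; the toy road network is an invented example.

```python
import heapq

def dijkstra(graph, start):
    """Return the shortest known distance from `start` to every node.
    `graph` maps each node to a list of (neighbour, edge_weight)."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter route was found already
        for neighbour, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(queue, (new_d, neighbour))
    return dist

# Invented toy road network: node -> [(neighbour, distance), ...]
roads = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```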

The topic of dodgy reasoning in this context has been researched from numerous perspectives, such as motivated reasoning, confirmation bias, cognitive dissonance, and loss aversion.

One field whose connection with span of consistency is less obvious, but tantalising, is criminology. A common theme in petty crime is that the individual doesn’t appear to have thought through the consequences of their actions. This has been connected in the literature with development of the frontal lobes of the brain, which are heavily involved in planning. This isn’t exactly the same as span of consistency, but it’s very similar, in terms of scanning ahead for various numbers of steps.

The issue of consistency across links also relates to a topic that I’ve blogged about several times, namely how horror and humour relate to Necker shifts, when you suddenly perceive something in an utterly different way. I’ll return to this theme in a later article.

Notes and links

You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.

There’s more about the theory behind this article in my latest book: Blind Spot, by Gordon Rugg with Joseph D’Agnese

http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903

You might also find our website useful:

http://www.hydeandrugg.com/

Overviews of the articles on this blog:

https://hydeandrugg.wordpress.com/2015/01/12/the-knowledge-modelling-book/

https://hydeandrugg.wordpress.com/2015/07/24/200-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/09/19/150-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/04/28/one-hundred-hyde-rugg-articles-and-the-verifier-framework/

 

