Mental models and metalanguage: Putting it all together

By Gordon Rugg

The previous articles in this series looked at mental models and ways of making sense of problems. A recurrent theme in those articles was that using the wrong model can lead to disastrous outcomes.

This raises the question of how to choose the right model to make sense of a problem. In this article, I’ll look at the issues involved in answering this question, and then look at some practical solutions.

What are the key variables?

At first sight, the issue of choosing the right model looks very similar to the well-understood issue of choosing the right statistical test. Most stats books include a clear, simple flowchart which asks you a few key questions, and then guides you to an appropriate set of tests. For example, it will ask whether your data are normally distributed or not, and whether you’re using repeated measures or not.

The key phrase here is “a few”. The flowchart approach works well if you’re dealing with a small number of variables. However, once you get beyond about half a dozen variables, the number of branches starts to become unmanageable.
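As a concrete illustration (my own sketch in Python, not taken from any particular stats book), here is the kind of two-question flowchart described above, for comparing two groups. The test names are standard, but the cut-down logic is purely illustrative; the point is that each extra yes/no question doubles the number of branches, so the approach stops scaling once the questions mount up.

def choose_two_group_test(normally_distributed, repeated_measures):
    # Two yes/no questions give four leaves; n questions give up to 2**n leaves.
    if normally_distributed:
        return "paired t-test" if repeated_measures else "independent-samples t-test"
    return "Wilcoxon signed-rank test" if repeated_measures else "Mann-Whitney U test"

print(choose_two_group_test(normally_distributed=False, repeated_measures=True))
# Wilcoxon signed-rank test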

So, how many models/variables do you need to consider when you’re trying to choose a suitable way to make sense of a problem?

In principle, the number is infinite, but in practice it’s tractable. The same “usual suspects” keep cropping up in the vast majority of cases. I’ve blogged previously about the issue of potentially infinite numbers of solutions, and about the distribution of “usual suspect” cases.

A related point is that if you’re just trying to improve a situation, as opposed to finding a perfect solution, then there’s usually a good chance that you’ll be able to improve it by using a “usual suspect” method. The same few issues crop up repeatedly when you look at why bad situations arose.

One way of getting some approximate numbers is to use the knowledge cycle as a framework. This cycle starts with eliciting knowledge from people, then selecting an appropriate representation, then checking it for error, and finally teaching that knowledge to people.

A common source of problems is elicitation; for example, faulty requirements, or misleading results from a survey. To handle elicitation properly, you need a framework that deals with about a dozen types of memory, skill and communication, and with about a dozen elicitation methods.

Another common source of problems is representation. The crisp versus fuzzy issue is one example; the mocha versus Lego issue is another. This is a more open set than elicitation, but again, about a couple of dozen key concepts are enough to make much better sense of the majority of problems occurring in the human world.

The story is similar for error. The heuristics and biases approach pioneered by Kahneman, Slovic and Tversky has identified about a couple of hundred error types; classical logic has identified a similar number. In practice, about a couple of dozen types account for the majority of occurrences in the world; for instance, confirmation bias, and strong but wrong errors.

The last part of the cycle involves education and training. For these, you need to handle about a dozen types of memory, skill and communication, and to map these onto about a couple of dozen delivery methods.

Pulling these numbers together, we get a rough figure of about a hundred core concepts that will let you make significantly better sense of most problems. That’s a non-trivial number, but it’s manageable.
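For what it’s worth, here is one possible tally behind that figure, treating the dozen or so types of memory, skill and communication as a single set shared between elicitation and education. The individual numbers are only the rough ones given above, so treat the total as an order-of-magnitude estimate rather than a precise count.

core_concepts = {
    "memory, skill and communication types": 12,  # shared by elicitation and education
    "elicitation methods": 12,
    "representation concepts": 24,
    "error types in common use": 24,
    "education/training delivery methods": 24,
}
print(sum(core_concepts.values()))  # 96, i.e. roughly a hundred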

Framework versus toolbox

At the start of this article, I mentioned flowcharts for choosing the appropriate statistical test. That’s a different type of problem from choosing an appropriate model for understanding a problem. With the stats test issue, you’re narrowing down the possibilities until you end up with one correct answer, which may involve either just one statistical test, or a small set of tests which are all equally suitable.

With choice of models for handling problems, you’re doing several things which are very different from the stats choice. From a methodological viewpoint, you’re choosing a collection of methods which will each address different aspects of the problem. You’re also deciding which aspects of the problem to tackle in which order. In addition, you’re deciding which models and methods to use in which sequence to tackle those aspects. This level of open-ended complexity doesn’t lend itself well to a flowchart approach.

There are also important issues outside the viewpoint of research methodology. At a sordidly practical level, there’s the issue of barriers to entry for a model or method. This is a concept from business, where a would-be business has to overcome issues of cost, time, etc., to get into a field. For aircraft manufacture, one obvious barrier to entry is the cost of infrastructure; for becoming a veterinarian, a major barrier to entry is the length of time needed to become qualified.

With research models and methods, the barriers to entry usually involve the time spent learning the underpinning concepts. This becomes apparent when you compare learning card sorts with learning laddering. At first glance, card sorts look the more complex of the two, but in reality the method is very easy to learn, with few necessary underpinning concepts. Laddering looks simpler, but in reality, if you want to use it properly, you need to learn some graph theory, plus basic facet theory; you also need to be reasonably good at mental visualisation. Once you’ve got past those barriers, though, laddering is an elegantly powerful method that is invaluable for a wide range of problems which other methods can’t handle.

At the level of politics within research, there’s a famous quote from Max Planck to the effect that science advances one funeral at a time. There’s a similar quote to the effect that eminent researchers don’t change their minds, but they do die. In most fields, there’s resistance to new methods and models. If you try using a new method or model, particularly one that challenges established orthodoxy, then you need to do it carefully. The best-case payoff is that you crack a major problem in your field and become a leading figure within it; the worst case is that you fail to crack even a small problem, and are marginalised within your field as a result. Even if you do manage to crack a major problem, you can expect resistance from die-hards for years or decades afterwards.

This doesn’t mean that using new methods or models is a recipe for disaster. On the contrary, when you do it right, the new methods and models can establish you as a significant player in your field. There’s a solid body of research into the key factors in successful innovation, such as Rogers’ classic book on diffusion of innovation, and if you’re interested in going down this route, then some time spent reading up on this will be a very useful investment.

Human factors

Following on from barriers to entry, one theme which occurs repeatedly in relation to models is human factors.

With the mocha model or the Lego model, it’s easy to handle the key features of the model in your working memory. When you get into the Meccano model, though, you’re soon dealing with many more variables; far too many to hold in working memory.

Similarly, a simple crisp set model requires less mental load than a combination of crisp and fuzzy sets.

When you’re experienced in methods such as systems theory or using crisp and fuzzy sets, that situation changes, because you learn how to group individual pieces of information into mental chunks, thereby reducing mental load. If you don’t have that experience, though, you can’t use chunking to reduce the load.
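As a toy illustration of what chunking buys you (my own example, not from the earlier articles): the same twelve digits are far easier to hold as three familiar dates than as twelve separate items, because working memory holds only a handful of chunks, regardless of how much each chunk contains.

digits = list("149217761066")        # twelve separate items to hold in working memory
chunks = ["1492", "1776", "1066"]    # three items, each a familiar date
print(len(digits), len(chunks))      # 12 versus 3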

When you look at public debate from this perspective, you see things in a new light. Instead of wondering why people are so lazy in their thinking, you start realising that there are limits to what the human brain can handle without specialist expertise. Even with specialist expertise, there are still concepts that can’t be properly handled by an unaided human brain. For instance, chaos theory has transformed weather prediction, but the modelling involved requires enormous numbers of calculations, performed on supercomputers handling quantities of data far beyond what any human could process.

This isn’t a huge problem in debate between researchers, if there is no time pressure for finding answers. It is, however, a major issue for political debate in the everyday world, where points need to be made fast, using concepts that most of the intended audience can understand. Historically, the overwhelming majority of points in political debate have been made verbally, which limits the available models even further, and tends to produce a dumbing down of debate.

Now, however, it’s much easier to use visual representations and multimedia in online debate. For example, you can use graphics to show that your opponent is treating something as a crisp set distinction, rather than a crisp/fuzzy/crisp distinction, or whatever. This lets you show the key problem visually in a way that has minimal barriers to understanding, and that shifts the debate into a framing that more accurately corresponds with reality.
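For instance, a couple of lines of code (a minimal sketch of my own; the “tall” example and the cutoff values are arbitrary) are enough to show the difference between a crisp category, with a hard yes/no boundary, and a fuzzy one, where membership shades gradually from 0 to 1.

def crisp_tall(height_cm):
    # Crisp set: you are either in the category or out of it.
    return 1.0 if height_cm >= 180 else 0.0

def fuzzy_tall(height_cm):
    # Fuzzy set: membership ramps from 0 at 165 cm to 1 at 190 cm.
    return min(1.0, max(0.0, (height_cm - 165) / 25))

for h in (160, 170, 180, 190):
    print(h, crisp_tall(h), round(fuzzy_tall(h), 2))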

It will be interesting to see whether this technology will produce a significant improvement in the quality and sophistication of debates, or whether it simply increases the number of insulting Photoshop memes in circulation…

Cognitive load, and span of implications

The issue of cognitive limitations has deep implications for ideologies. In religion and politics, it’s common to accuse ideological opponents of oversimplifying a topic because of stupidity or laziness. Another common accusation is that an opponent’s beliefs are inconsistent.

If we look at such cases from the viewpoint of cognitive load and of span of implications, then they look very different. In some cases, there may be a good-faith issue about the cognitive load involved in using one model, or about the mental barriers to entry for the model. Sometimes, this can be resolved simply by using a visualisation that reduces the cognitive load enough for the other party to grasp what you’re saying. The lightbulb effect when this happens is usually striking.

Similarly, if you look at the number of links that someone is using in a chain of reasoning, then apparent inconsistency or hypocrisy can often be explained as a good-faith issue: the inconsistency occurs beyond the span of implications that the person is considering. Again, a visual representation can help.

Often, though, ideological debates are not conducted in simple good faith, but involve motivated reasoning and other forms of bias. I won’t go into that issue here, for reasons of space. For the moment, I’m focusing on cases where understanding different mental models can help resolve honest misunderstandings and honest differences of opinion.

Conclusion and ways forward

In summary, the “two cultures” model of arts versus sciences has an element of truth, but it lumps together several different distinctions. When you start looking at a finer level of detail, you find that some of the most important distinctions in cultures and worldviews come down to a handful of issues such as whether you’re dealing with a system or a non-system, and whether you’re dealing with fuzzy sets or crisp sets or a mixture of both.

These issues are fairly easy to understand, but they’re very different from the ones usually mentioned in debates about the two cultures, and they’re nowhere near as widely known as they should be. The implications for research, and for ideological debates in the world, are far reaching.

Although the number of issues involved in choosing the most appropriate model for a problem is in principle infinite, in practice a high proportion of problems can be solved or significantly reduced via a tractably small number of commonly-relevant models. In this article, I’ve focused on issues and models that are particularly common in real world problems.

Often, a problem can be tackled from several different directions, each using a different model. This, however, is not the same as saying that all models are equally valid all the time. If you read the literature on disasters, you’ll find plenty of examples where a disaster was due to someone having the wrong mental model of a piece of technology. You’ll find much the same in the literature on cross-cultural relations.

So, in conclusion: Choice of models makes a difference, and knowing a broader range of models gives you more ways of making sense of the world.

That concludes this short series on mental models. I’ll return to some of the themes from this series in later articles.

Notes and links

You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.

There’s more about the theory behind this article in my latest book:

Blind Spot, by Gordon Rugg with Joseph D’Agnese

http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903

You might also find our website useful:

http://www.hydeandrugg.com/

Overviews of the articles on this blog:

https://hydeandrugg.wordpress.com/2015/01/12/the-knowledge-modelling-book/

https://hydeandrugg.wordpress.com/2015/07/24/200-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/09/19/150-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/04/28/one-hundred-hyde-rugg-articles-and-the-verifier-framework/

 

 
