By Gordon Rugg
The previous articles in this series looked at how everyday entities such as a cup of coffee or a Lego pack can provide templates for thinking about other subjects, particularly abstract concepts such as justice, and entities that we can’t directly observe with human senses, such as electricity.
Those templates, though, dealt with entities that stay where they’re put. With Lego blocks or a cup of coffee, once you’ve put them into a configuration, they stay in that configuration unless something else disturbs them. The Lego blocks stay in the shape you assembled them in; the cup of coffee remains a cup of coffee.
However, not all entities behave that way. In this article, I’ll examine systems theory, and its implications for entities that don’t stay where they’re put, but instead behave in ways that are often unexpected and counter-intuitive. I’ll use Meccano as a worked example.
The image on the left of the banner picture shows a schematised representation of one Meccano bolt, two Meccano strips, and a Meccano baseplate. In this configuration, they will just stay where they are put. However, if you assemble these identical pieces into the configuration shown in the middle image, the situation changes. If you press down on one end of the horizontal bar, then it will pivot around the bolt in the middle, and the other end will go up. The Meccano has changed from being a group of unconnected pieces into being a simple system, where a change in one part of the system can cause a change in other parts of the same system.
There’s a whole body of work on systems and their behaviour, which is often counter-intuitive, and often difficult or impossible to predict, even for quite simple systems. I’ve blogged about this topic here.
Systems and feedback loops
In the previous article in this series, I mentioned a real-world case where the wrong mental model led to repeated tragedy. It came from Barbara Tuchman’s book The Guns of August, about the opening of the First World War. The German high command expected the populations they conquered to acquiesce to overwhelming force, and were astonished when they encountered resistance. Instead of questioning their expectations, they concluded that their force hadn’t been sufficiently overwhelming, and increased the punishments they inflicted for resistance. The results were sadly predictable.
What was going on in this case was a classic example of what’s known in systems theory as a feedback loop. An increase in the German demonstrations of force (A in the diagram below) increased the Belgians’ desire to fight back against the invaders (B below). This in turn increased the German demonstrations of force, and so on.
This type of loop is known in systems theory as a self-amplifying loop. (It’s more often known as a positive feedback loop, but that name tends to cause confusion, because in many fields positive feedback means saying nice, encouraging things.) Self-amplifying loops are usually bad news in real life, since they often involve catastrophic runaway effects.
Another type of loop is a self-deadening loop (again, more often known as a negative feedback loop, with the same potential for misunderstanding). In self-deadening loops, an increase in one variable leads to a decrease in another variable, so that the two counteract each other, usually leading to stability in the system. One classic example is a thermostat; another is an autopilot in aviation.
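If you like to see the mechanics spelled out, here’s a minimal sketch in Python of the two types of loop. It’s my own illustration rather than anything from the diagram above; the function names, variables and gain values are arbitrary choices made for this example.

```python
# A minimal sketch, not from the original article: function names, variables
# and gain values are arbitrary illustrative choices.

def thermostat(temp, set_point=20.0, gain=0.3, steps=10):
    """Self-deadening (negative feedback) loop: the further the room is from the
    set point, the harder the heater or cooler works, so the temperature settles."""
    readings = []
    for _ in range(steps):
        error = set_point - temp      # how far we are from where we want to be
        temp += gain * error          # correction proportional to the error
        readings.append(round(temp, 2))
    return readings

def escalation(force=1.0, resistance=0.0, gain=0.3, steps=10):
    """Self-amplifying (positive feedback) loop: more force provokes more
    resistance, which provokes more force, so both values keep growing."""
    readings = []
    for _ in range(steps):
        resistance += gain * force    # A increases B
        force += gain * resistance    # B increases A
        readings.append((round(force, 2), round(resistance, 2)))
    return readings

print(thermostat(temp=12.0))   # climbs towards 20 and stays there
print(escalation())            # both numbers grow without limit
```

The escalation function is a crude caricature of the force-and-resistance loop described above: each variable feeds the other’s growth, and nothing in the loop ever pulls the values back down.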
It’s possible for systems to contain subsystems; for instance, a car is a system which consists of an engine subsystem, a suspension subsystem, a steering subsystem, and so forth.
A single change within a system can cause changes to many or all of the other parts of the system, and each of these changes can in turn have further effects on many or all parts of the system. These ripples can continue for a duration that is usually difficult or impossible to predict, even for apparently simple systems.
A fascinating example of this is Conway’s Game of Life. This is a system which was deliberately designed to be as simple as possible. If you’re not already familiar with it, you might like to look at the Wikipedia article on it, which includes several animated examples. In brief, although the game is based on deliberately simple rules, it behaves in complex ways that its own creator wasn’t able to predict.
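By way of illustration, here’s a minimal Python sketch of the standard Game of Life rules. The rules are Conway’s; the sparse-set representation, the function names and the choice of starting pattern are simply one convenient way of coding them. The rules fit in a dozen lines, yet the glider pattern below travels across the grid in a way that is stated nowhere in those lines.

```python
from collections import Counter

def step(live_cells):
    """Apply one generation of Conway's rules to a set of live (x, y) cells."""
    # Count how many live neighbours each cell on the board has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive in the next generation if it has exactly three live
    # neighbours, or if it is currently alive and has exactly two.
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# The classic "glider": five cells that travel diagonally across the grid,
# a behaviour that appears nowhere in the rules above.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(generation, sorted(pattern))
    pattern = step(pattern)
```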
Emergent properties
Another common feature of systems is emergent properties. A system will usually behave in ways which couldn’t readily be predicted from a knowledge of its components, and which emerge from the interaction between the components.
The Game of Life is a particularly striking example of this. Another example is that the concept of rpm (revolutions per minute) is perfectly meaningful for a working car engine, but would be meaningless if applied to the components of that same engine before they were assembled. I’ve discussed this in more detail in my article on systems theory, and in my article about range of convenience, i.e. the range of contexts in which a term can be meaningfully used.
A lot of people have trouble getting their heads round the concept of emergent properties. There’s been a lot of resistance in theology, philosophy and popular culture to the idea that something like life or consciousness might be nothing more than an emergent property of a particular system, as opposed to being the result of some unique mystical ingredient such as the “spark of life”.
Emergent properties have a habit of cropping up in unexpected places. One example is the concept of sexism without sexists, where system properties lead to an outcome that might never have been intended by the people who set up the system. For instance, many university departments have a tradition of seminars with outside speakers being scheduled at the end of the afternoon, followed by discussions in the pub. This combination of timing and location tends to be difficult for parents, for people whose religion disapproves of alcohol and pubs, and for carers, among others. This leads to these groups being under-represented in their profession, which in turn leads to their needs not being considered sufficiently when seminar slots are being arranged: a self-amplifying feedback loop.
Implications
Most of the world consists of systems of one sort or another, whether natural systems, or human systems such as legal and political systems.
However, knowledge of systems theory is very patchy. In some fields, it’s taken completely for granted as a core concept. In other fields, some people use it routinely, and others have never heard of it.
One surprisingly common misconception about systems is that because they’re often complex and difficult to predict, there’s no point in trying to model them, particularly for human social systems.
In reality, although systems can be complex and difficult to predict, they often show considerable stability and predictability. Even within chaotic systems, where a tiny change to the initial configuration has huge effects later on, there are often underlying regularities that you can work with once you know about them.
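As a concrete illustration (mine, not from the article), the logistic map is a textbook chaotic system: a one-line update rule where a tiny change to the starting value soon produces wildly different trajectories, yet every trajectory stays trapped within the same bounded range. The parameter value and starting points below are arbitrary illustrative choices.

```python
# A minimal sketch of a chaotic system: the logistic map x -> r * x * (1 - x).
# The parameter r = 3.9 and the starting values are arbitrary illustrative choices.

def logistic_trajectory(x0, r=3.9, steps=25):
    """Iterate the logistic map from starting value x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # a change in the sixth decimal place

for step_number, (xa, xb) in enumerate(zip(a, b)):
    print(step_number, round(xa, 4), round(xb, 4))

# The two runs diverge visibly after a couple of dozen steps (sensitivity to
# initial conditions), but both stay between 0 and 1 throughout: an underlying
# regularity you can rely on even when the details are unpredictable.
```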
Another practical issue is that if you’re familiar with basic systems theory, you can design at least some problems out of a system. For instance, if you spot a self-amplifying loop in a system design, you will probably want to look at it very closely indeed, in case it’s going to cause problems.
Another classic is spotting where a system contains incentives for a subsystem to improve things for itself at the expense of other subsystems and of the system as a whole, as happened with incentive structures in financial systems before the 2008 financial collapse.
You can also use knowledge of systems theory to work backwards from an unwelcome outcome, to identify likely causes for it, and to identify possible solutions.
For more detailed handling of systems theory, though, you need to get into some pretty heavy modelling. This takes you beyond what can be handled by the unaided human cognitive system, which is probably why systems theory isn’t more widely used in everyday non-specialist mental models.
The next article in this series will pull together themes from the articles so far, and will look at the question of how to choose the appropriate mental model for a given problem.
Notes and links
You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.
There’s more about the theory behind this article in my latest book:
Blind Spot, by Gordon Rugg with Joseph D’Agnese
http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903
You might also find our website useful:
Overviews of the articles on this blog:
https://hydeandrugg.wordpress.com/2015/01/12/the-knowledge-modelling-book/
https://hydeandrugg.wordpress.com/2015/07/24/200-posts-and-counting/
https://hydeandrugg.wordpress.com/2014/09/19/150-posts-and-counting/