By Gordon Rugg
Systems theory is about what happens when individual items are connected and become a system. “Items” in this context can be anything physical and/or abstract, which gives you a pretty huge scope. Systems are ubiquitous. Examples include mechanical systems such as vehicles; social systems such as organisations or countries; and logical systems such as software. Many of these systems can cause disasters when they fail, as with nuclear power plant safety systems or autopilot systems in aircraft. In short, systems are important.
There are regularities in how systems behave, and some of those regularities are both counter-intuitive and extremely important. That’s a potentially dangerous combination.
If you understand systems theory, then the world makes a lot more sense, particularly if you combine it with game theory, which will be the topic of one of my next articles. Most questions that start with “Why don’t they…?” can be answered with “Resources”, “Systems theory” or “Game theory”.
In this article, I’ll look at some core concepts from systems theory.
The usual definition of a system is along the lines of “two or more interconnected and interacting entities”. I’m deliberately not going to spend much time discussing detailed definitions; instead, I’ll focus on four core concepts:
- Properties of systems and entities
- Feedback loops
- Lag
- Improving sub-systems won’t necessarily improve the system, and may actually make it worse
Properties of systems and entities
The first core concept is that a system consists of entities that are interconnected and interacting. It’s a simple idea, but it leads into some concepts with very far-reaching implications.
Here’s an example.
The picture above shows three entities, namely an arrow, a bow and a bowstring. Each of these entities has properties – for example, they all have a length and a weight.
If we combine these entities, they now form a system, as in the picture below.
This system can do things that the individual entities couldn’t do. The most obvious example is that the system can shoot the arrow at speed for a significant distance.
Although this is a system with very few components, it illustrates some profound points.
Systems within systems
One point is that systems can be nested within other systems, as in the diagram below, where each box is a system or subsystem.
Often, this nesting is many layers deep. The image below shows the same system and subsystems as in the previous image, shown vertically separated into four layers so that the levels of nesting are easier to see.
Readers familiar with archery will probably already have spotted that the bow itself (as opposed to the bow plus bowstring system) is a system, composed of a layer that is strong in tension (the back of the bow) and a layer that is strong in compression (the belly of the bow). Similarly, the arrow is itself a system, composed of the head (which affects the aerodynamics), the shaft (whose properties affect how the arrow behaves when leaving the bow) and the fletching (the feathers, whose shape, position and other properties have a major effect on how the arrow behaves in flight). In both cases, the sub-components need to be combined in a particular way to produce a working system.
Emergent properties
One key point about systems is that a system will almost certainly have properties that its individual component entities don’t have. For example, the system of bow plus bowstring plus arrow has properties such as draw weight (the amount of force needed to pull the bow back for the length of the arrow shaft) and range (the distance that the bow can shoot the arrow). Neither of these concepts can meaningfully be applied to the components of this system on their own.
Similarly, using a bigger-scale example, the system of a Land Rover has properties such as speed that can only be applied to the system as a whole, not to the component parts in isolation.
If you’re dealing with a deeply nested set of systems, then each layer of the overall system will probably have different properties from the ones above and below. In the case of a Land Rover, for instance, the engine sub-system will have properties that can’t be meaningfully applied to the chassis sub-system, such as maximum revolutions per minute.
Properties that emerge at different levels of nesting are usually referred to as emergent properties.
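For readers who find code easier than prose, here is one way to picture emergent properties. It’s a minimal sketch of my own, with invented class names and numbers rather than anything from a real model; the point is simply that draw weight and range are only defined on the assembled system, never on a lone component.

```python
from dataclasses import dataclass

@dataclass
class Bow:
    length_m: float   # properties that an entity has on its own
    mass_kg: float

@dataclass
class Bowstring:
    length_m: float
    mass_kg: float

@dataclass
class Arrow:
    length_m: float
    mass_kg: float

@dataclass
class BowArrowSystem:
    """The assembled system: bow plus bowstring plus arrow."""
    bow: Bow
    string: Bowstring
    arrow: Arrow

    # Emergent properties: these only make sense for the system as a whole.
    @property
    def draw_weight_n(self) -> float:
        # Invented placeholder relationship, purely for illustration.
        return 600.0 * self.bow.mass_kg

    @property
    def range_m(self) -> float:
        # Another invented placeholder; real ballistics is far more complicated.
        return self.draw_weight_n / (self.arrow.mass_kg * 20.0)

system = BowArrowSystem(Bow(1.8, 0.6), Bowstring(1.7, 0.02), Arrow(0.75, 0.03))
print(system.draw_weight_n, system.range_m)   # defined for the system...
# print(Arrow(0.75, 0.03).range_m)            # ...but not for a component (AttributeError)
```

Asking a lone arrow for its range simply isn’t a meaningful question, which is the point.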
This concept overlaps considerably with the very useful concept of range of convenience, i.e. the range of contexts within which a particular term can meaningfully be applied.
The concept of emergent properties has often been misunderstood in relation to the concept of reductionism.
One interpretation of reductionism argues that you can understand a complex system, such as a pack of wolves, by looking at each of the nested levels of subsystem involved until you reach the level of individual atoms, and by understanding the properties of each subsystem; on this view, understanding every level tells you everything that there is to know. This is shown in the image below as a set of coloured bands, one for each layer of subsystem. Only by combining the information from all the layers will you have a complete understanding of the system and its component subsystems.
Another interpretation of reductionism, usually either as a straw man or as a misunderstanding, is the idea that you only need to know the properties of the lowest-level systems, and that you will then know everything there is to know about the higher levels.
This issue has generated a lot of anger among those who believe that science is a reductionist enterprise in the latter sense, which implicitly views activities such as art and music and altruism as being nothing more than atoms interacting with each other. In reality, though, science is very much about the other model of reductionism, where it’s essential to understand the properties specific to the level(s) of nesting on which your research is focused, not just the properties of the lowest level.
In the case of the wolf pack, for instance, there would be a layer of attributes such as pack size or pack hunting strategy that could only apply meaningfully to the pack as a whole, and then other attributes such as sex that could only be applied meaningfully to each individual wolf, and still other attributes that could only be applied meaningfully to the organs of an individual wolf, such as its lung capacity, and so on via the cellular level to the molecular and atomic level.
Back to systems theory proper…
Feedback loops
Another key feature of systems is that they often include feedback loops, which have a habit of producing unpleasant and unexpected results.
Before we get into a detailed analysis of feedback loops, an important point to note is that some of the key concepts involved have been widely adopted in vernacular English, but with very different meanings. This frequently leads to deep misunderstandings. I’ll pick up these differences as they arise.
A feedback loop involves two or more components within a system affecting each other in a loop. In the simplest case, component A affects component B; component B then affects component A, which then affects component B, and so on. This can either go on forever, or go on until something breaks the loop.
We can show this process diagrammatically, as in the image below.
There’s a curving arrow symbol from A to B, showing that A affects B, and there’s a similar (but deliberately not identical) arrow from B to A, showing that B affects A. It’s a loop, with no specified mechanism for breaking the loop.
I’ve included a couple of small boxes labelled L1 and L2 at the start of each arrow symbol, deliberately different from each other in size and colour, to represent lag in the system; I’ll discuss the concept of lag in the next section.
There’s a reason that I’ve deliberately not made the diagram symmetrical. It’s because the way that A affects B isn’t necessarily the same as the way that B affects A. It may happen to be the same, but it doesn’t have to be. This can have far-reaching implications for how the system works, and for how the system can fail. This is a big issue when designing safety-critical systems such as autopilot systems in aircraft, or systems that affect very large numbers of lives, such as an education system.
A simple example of this asymmetry is an electric bell. One design involves using an electromagnet to pull the bell hammer in one direction, and a mechanical spring to pull it in the other direction. These are two very different mechanisms, but they’re closely linked to each other within the system.
The next issue to consider is how A and B affect each other.
The loop between A and B can take four significant forms:
- A increases B and B increases A
- A increases B and B decreases A
- A decreases B and B decreases A
- A decreases B and B increases A
All of these cases are known technically as feedback loops.
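One rough way to see the four forms in action is to simulate them. The sketch below is my own illustration rather than anything from the systems theory literature; the coupling strength, starting values and lags are all arbitrary. On each step, A nudges B and B nudges A; the sign of each nudge decides which of the four forms you get, and each effect arrives after a delay (the L1 and L2 boxes in the diagram above).

```python
from collections import deque

def simulate(sign_ab, sign_ba, steps=12, lag_ab=1, lag_ba=2, coupling=0.2):
    """Simulate a two-component loop between A and B.

    sign_ab is +1 if A increases B, -1 if A decreases B; likewise sign_ba.
    lag_ab and lag_ba are how many steps each effect takes to arrive
    (the L1 and L2 boxes in the diagram).
    """
    a, b = 10.0, 10.0
    pipe_ab = deque([0.0] * lag_ab)   # effects of A on B, still "in transit"
    pipe_ba = deque([0.0] * lag_ba)   # effects of B on A, still "in transit"
    for _ in range(steps):
        pipe_ab.append(sign_ab * coupling * a)   # A's effect enters its pipeline
        pipe_ba.append(sign_ba * coupling * b)   # B's effect enters its pipeline
        b = max(0.0, b + pipe_ab.popleft())      # the oldest effect finally lands on B
        a = max(0.0, a + pipe_ba.popleft())      # ...and on A (floored at zero)
    return round(a, 1), round(b, 1)

for signs in [(+1, +1), (+1, -1), (-1, -1), (-1, +1)]:
    print("signs", signs, "-> A, B after 12 steps:", simulate(*signs))
```

With both signs positive, A and B escalate together; with both negative, they collapse towards zero; with mixed signs, each pushes back against the other. Nothing changes between the runs except the two signs.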
Unfortunately, as noted above, the concept of feedback has been picked up in vernacular English and used with very different meanings, generating a lot of potential for confusion.
A increasing B, and B increasing A
Here’s an example of one type of feedback loop. In the technical sense, it’s known as a positive feedback loop. This means something very different from the vernacular sense of “positive feedback”. I’ll discuss the differences and their implications below.
In the image above, A is causing an increase in B, and B is causing an increase in A. It’s a positive feedback loop, but “positive” in this technical sense does not mean “good” or “encouraging”. Instead, it just means “increasing”. Whether human beings would like or dislike that increase is a completely different issue. “Positive feedback” in its original systems theory sense is a completely different concept from “positive feedback” in the vernacular sense of the term.
This would be a bad enough source of confusion on its own. However, the full story is worse. In systems theory, positive feedback loops usually mean trouble. Here’s an example.
In a forest fire, the flames generate heat, which makes the heated air rise. This rise causes more air to be sucked in from the surrounding area, producing winds, which make the fire burn more fiercely. This in turn increases the amount of heated air rising from the fire, which in turn increases the speed at which more air is sucked in from the surrounding area, which in turn makes the fire burn even more fiercely. This process of self-perpetuating increase will continue until something makes it break down.
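As a back-of-the-envelope illustration, here is that loop in code. The numbers are invented and this is in no sense a fire model; the point is the shape of the behaviour, which keeps accelerating until something outside the loop, in this case running out of fuel, breaks it.

```python
def forest_fire(fuel=1000.0):
    """Toy positive feedback loop: burning -> heat -> wind -> more burning.
    Every constant here is invented for illustration; this is not a fire model."""
    burn_rate, wind, step = 1.0, 0.0, 0
    remaining, half_point = fuel, None
    while remaining > 0:
        burned = min(remaining, burn_rate)   # limited only by the fuel that's left
        remaining -= burned
        wind += 0.1 * burned                 # heat from the burning sucks in more air
        burn_rate *= 1.0 + 0.02 * wind       # the incoming wind fans the flames
        step += 1
        if half_point is None and remaining <= fuel / 2:
            half_point = step
    print(f"first half of the fuel gone by step {half_point}, all of it gone by step {step}")

forest_fire()
```

The second half of the fuel goes far faster than the first, which is the signature of a positive feedback loop: nothing inside the loop limits it.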
Most positive feedback loops contain the potential for this type of escalation. So what about other types of loop?
A decreasing B, and B decreasing A
You can have a system where A and B are each causing the other to decrease further in turn. This is also often not a good thing. A classic example is breakdown of trust between two people or two groups, where each action by one provokes a counter-action by the other, leading to a downward spiral in trust. It’s a situation well known to marriage guidance advisors and to industrial relations mediators. However, it can be a good thing, as in political de-escalation, where tensions decrease step by step between the parties. The difference between this and the previous case is largely a matter of semantics – a decrease of tension could be re-phrased as an increase in calm, for instance.
A and B affecting each other in opposite directions
Another common type of loop involves A and B affecting each other in opposite directions. In these cases, an increase in A will lead to a decrease in B, or vice versa. This type of loop can cause stability within the system, by keeping any increases limited to a manageable level.
There’s a feedback loop of this sort that can affect forest fires. Sometimes, the air that the fire sucks in is moist. In such cases, the rising air can eventually produce rainclouds, and the resulting rain can decrease or put out the fire.
A wide range of systems deliberately include stabilising loops of this sort. In political systems, the concept of checks and balances within a constitution is an example; aircraft autopilot systems are an example of a mechanical system that uses this principle to increase safety.
I’ll briefly discuss autopilot systems, since these illustrate some other important points about systems theory.
In brief, an autopilot works by checking actual direction and height against the planned direction and height. If the aircraft has drifted too far to the left, then the autopilot nudges it to the right until it’s back within an acceptable range; similarly, if the aircraft has drifted too high, then the autopilot will reduce its altitude until the altitude is acceptable.
The core concept is that simple. One huge advantage of this simplicity is that it works regardless of the complexity of the surrounding environment. The autopilot doesn’t have to try modelling the enormously complex weather systems that could blow the plane off course; instead, it just modifies the course until the modification has compensated for whichever winds are blowing.
The implications for other systems, such as economic systems, are obvious and enormous. If you can find and build in the appropriate feedback loops into the system, then you can probably keep the system stable, even if you can’t predict what the wider world will throw at it, and even if you don’t yet fully understand the complete system. (Yes, that’s still a big “if,” but the principle is sound, as demonstrated by the autopilot versus the weather system.)
Autopilot systems demonstrate another important concept. The autopilot only makes a course or altitude correction when the aircraft has drifted more than a particular distance off course. There’s a very practical and sensible reason for this. If you didn’t build in this deliberate delay, then the autopilot would be fussing around with tiny adjustments every second, which wouldn’t be good in terms of wear and tear on the mechanical components, and also wouldn’t make much difference in overall accuracy. Anyone working in the health service or education will be all too familiar with what happens when governments fuss around with drastic re-organisations before the dust from the previous intervention has settled. It’s usually not a good idea.
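Here is a rough sketch of that idea, with invented numbers rather than any real autopilot’s logic. The drift is shoved about by wind that the controller never models; the controller only acts once the drift leaves a tolerance band, which is the deliberate delay described above.

```python
import random

def autopilot_demo(steps=50, tolerance=5.0, seed=1):
    """Toy course-holding loop: only correct once the drift leaves the tolerance band.
    Units and numbers are arbitrary; real autopilots are far more sophisticated."""
    random.seed(seed)
    drift = 0.0          # how far off the planned course the aircraft currently is
    corrections = 0
    for _ in range(steps):
        drift += random.uniform(-3.0, 3.0)   # unmodelled wind shoves the aircraft about
        if abs(drift) > tolerance:           # outside the acceptable band...
            drift = 0.0                      # ...steer back onto course (a toy, instant correction)
            corrections += 1
    print(f"final drift {drift:+.1f}; corrections made on {corrections} of {steps} steps")

autopilot_demo()
```

On most steps the controller does nothing at all, yet the drift stays bounded; the tolerance band is doing the job of not fussing over every tiny gust.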
This delay between event and response is technically known as lag. In some fields, such as computer games, lag is viewed as a huge problem. In systems theory, however, lag is value-neutral. Sometimes it’s a bad thing, but surprisingly often, lag is very useful indeed.
Lag
Lag has a lot of similarities to concepts such as slack and spare capacity in organisational theory, and redundancy in information theory. All of these concepts are usually viewed as bad things by novices; all of them are in reality much more complex and multi-faceted, and all of them can be extremely useful when applied properly.
Experts in safety-critical system design become very twitchy when they see a design that has little lag in it. Experts in organisational behaviour react similarly when they’re told that an organisation is efficient or lean or that it has little waste. Why do they react that way? It all makes sense if you focus on what happens when things go wrong, rather than on what happens when things are going right.
If something goes wrong in a system that has very little lag in it, then the problems will escalate very quickly indeed. That’s not a situation that anyone would want to be in. Anyone with sense would prefer a situation where the problem takes more time to become serious, so that the humans involved have a chance to fix it before it gets too bad.
Similarly, if you’re in an organisation which is lean and efficient, what happens when there’s an outbreak of flu and staff are staying home because they’re ill? Because the organisation is lean and efficient, there’s no spare capacity to handle the sudden shortfall. That’s a major problem, but it’s a completely avoidable one, and this concept has been well known in organisational theory for a century or more – it’s one of the key findings from the work of Max Weber. Short-term efficiency can be the enemy of long-term efficiency; delays in response can make a system more stable.
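A quick way to see why the experts twitch is to run the flu scenario with some invented numbers. The sketch below is purely illustrative: a team sized exactly to its normal workload versus a team carrying a little spare capacity, both hit by the same three days of staff absence.

```python
def run_week(staff, daily_demand=100, per_person=10, sick_days=(1, 2, 3), staff_off_sick=3):
    """Toy workload model: tasks arrive every day, and anything unfinished becomes backlog.
    All of the figures are invented for illustration."""
    backlog = 0
    for day in range(5):                       # one working week
        available = staff - (staff_off_sick if day in sick_days else 0)
        capacity = available * per_person      # tasks this many people can clear today
        backlog = max(0, backlog + daily_demand - capacity)
    return backlog

lean_team = 10        # sized exactly to the normal demand: no slack at all
team_with_slack = 12  # carries some apparently 'wasteful' spare capacity
print("backlog after a week with a flu outbreak, lean team:      ", run_week(lean_team))
print("backlog after a week with a flu outbreak, team with slack:", run_week(team_with_slack))
```

The lean team ends the week with a backlog that it has no spare capacity to ever clear; the team carrying apparent waste has absorbed the shock and is already working its backlog off.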
As an example of these concepts in practice, John Seddon has done a lot of excellent work in this area. A good introduction to his work is his California Faculty Association lecture, which is on YouTube, in several short parts; this link is to the second part, where he starts getting into the detail of his topic.
Sub-system improvements don’t always improve the system, and may make it worse
This is one of the most counter-intuitive findings to emerge from systems theory; it has major implications for any attempt to improve a system, such as the education system or the health system.
At first sight, it might appear obvious that improving sub-systems within a system will automatically and inevitably bring about improvements in the system as a whole. This assumption is often a central feature of high-level policy. However, it simply isn’t true.
Often, improving sub-systems has no effect on the quality of the system as a whole, whether that system is mechanical or organisational or software. Quite often, improving sub-systems actually reduces the quality of the system as a whole.
I’ve written about this in some detail here. In brief, there are several ways that this effect can occur. One way is that improvements in one sub-system can bring problems for other sub-systems; for instance, fitting a more powerful engine into a car can cause problems for the brakes sub-system, the fuel sub-system and possibly the suspension sub-system. Another possibility is that sub-systems improve at the expense of the system as a whole – for instance, departments in an organisation might all improve their productivity by referring all customer complaints to the head office, meaning that the complaint handling becomes much more cumbersome and expensive to the organisation as a whole, since it’s now being handled by people who aren’t familiar with the relevant specific details.
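The complaints example can be put into rough numbers, again as a purely invented illustration rather than data from any real organisation. Each department improves its own figures by forwarding complaints to head office, where staff who lack the specific details take far longer per complaint; every sub-system looks better while the system as a whole gets worse.

```python
# Invented figures, purely for illustration.
DEPARTMENTS = 5
COMPLAINTS_PER_DEPT = 20
LOCAL_HOURS = 1.0          # a department that knows the details resolves a complaint in 1 hour
HEAD_OFFICE_HOURS = 3.0    # head office, lacking the specifics, needs 3 hours per complaint
FORWARDING_HOURS = 0.25    # each department still spends a little time forwarding it

def total_hours(handle_locally: bool) -> float:
    if handle_locally:
        return DEPARTMENTS * COMPLAINTS_PER_DEPT * LOCAL_HOURS
    per_dept = COMPLAINTS_PER_DEPT * FORWARDING_HOURS            # departments look more 'productive'
    head_office = DEPARTMENTS * COMPLAINTS_PER_DEPT * HEAD_OFFICE_HOURS
    return DEPARTMENTS * per_dept + head_office

print("hours spent, complaints handled locally:      ", total_hours(True))    # 100.0
print("hours spent, complaints pushed to head office:", total_hours(False))   # 325.0
print("hours per department drop from",
      COMPLAINTS_PER_DEPT * LOCAL_HOURS, "to", COMPLAINTS_PER_DEPT * FORWARDING_HOURS)
```

Each department’s complaint-handling workload drops from 20 hours to 5, so every sub-system’s own measure improves, while the organisation as a whole spends more than three times as long on the same complaints.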
As an example of getting it right, here’s Roy Lilley elegantly, eloquently and humorously discussing this principle with regard to healthcare purchasing policy:
The converse principle also applies; making a system as a whole more efficient can have no effect on individual sub-systems, or can make sub-systems less efficient.
This issue has major, extremely practical, implications for policy, but it’s not widely known. The result is that policy is often driven by assumptions that are simply wrong, and that make matters worse rather than better. It’s a particular risk when someone is peddling a clear, simple, big-picture solution; such proposed solutions are often clear and simple because they’re simply ignoring the inconvenient realities which doom those proposed solutions to inevitable failure, causing financial and human pain along the way.
Closing thoughts
This article has looked at some key concepts from systems theory. For brevity, there’s a lot of detail that I haven’t discussed; I haven’t gone into concepts such as soft systems and hard systems, or sources and sinks, or homeostasis, for example.
Systems are important, and ubiquitous, and they often behave in ways which are actually quite simple, but which are very different from what most people would expect. Systems theory is powerful, but it can appear very paradoxical at first sight.
This has profound implications for anyone attempting to improve a system, particularly a huge, complex system such as a national health system or a national education system. Often, ideas that look simple and attractive are actually disastrously wrong, because they have failed to consider basic issues from systems theory. Conversely, it’s often possible to fix apparently intractable problems by using some very simple concepts from systems theory.
In my next article in this mini-series, I’ll look at game theory, and how that interacts with systems theory and with real-world systems problems. I’ll then look at the mathematics of desire, and how our biases steer us towards some possibilities and away from others. Finally, I’ll bring these themes together in an examination of belief systems and their implications for education and related fields.
Notes
You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.
There’s more about the theory behind this article in my latest book:
Blind Spot, by Gordon Rugg with Joseph D’Agnese
http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903