By Gordon Rugg

I’m interested in game theory for various reasons. One reason is that it makes sense of a wide range of phenomena which otherwise look baffling.

Another is that it can be combined with approaches such as *Transactional Analysis* and *script theory*, to provide systematic and rigorous analyses of how individual people view the world, and what the likely outcomes will be when those views collide with other people’s views, or with the things that life throws at us.

It’s a powerful technique, and it often produces unexpected and counter-intuitive results.

**Background**

If there was ever a competition for the most easily misunderstood name for a concept, then *game theory* is likely to finish in the top ten. The “game” part suggests that it’s something frivolous, or just about games, but it’s neither of those. The “theory” part suggests to non-researchers that it’s just an unproven idea, which is also wrong, on multiple levels. It would be more accurately described as something like *strategy payoff matrix analysis*, but that probably wouldn’t catch on.

Game theory is about assessing the likely outcomes from a particular course of action. It gets its name from its origins a couple of centuries ago in games of chance, where card players used it to work out the chance of winning with a given hand of cards. Professional gambling is a serious business, and from those beginnings, game theory was adopted in a range of other serious fields, such as evolutionary ecology, where it was used to work out which strategies would lead to a species surviving or dying, and warfare, where the US armed forces used it in the Vietnam War to assess the outcomes from different bombing strategies. Military use and abuse of game theory left the approach tainted by association in some disciplines; in other disciplines, it’s a core part of how research is done.

Here’s an example of the underlying principle.

Imagine that you’re a hungry fox, searching for food on a snowy winter day. You’re approaching a roadside picnic site, and you see an abandoned hamburger on one of the tables.

You’re about to trot over to the table when you spot an eagle flying down towards the hamburger. You now have to make a decision. Do you try to scare off the eagle, or do you forget about the hamburger and move on to somewhere else? It’s not an easy decision. A hamburger would be enough food for the day, which is a serious consideration in winter. There’s a sporting chance that the scaring-off tactic would work. However, there’s also a real risk that the eagle would refuse to be scared off, and would fight for the food. A fox and an eagle are fairly well matched, so the fight would probably result in injuries to both animals.

In game theory, you can plot the options and the possible outcomes as a matrix. The matrix below shows the strategies and the outcomes from the fox’s point of view. The usual convention in game theory is to show the outcomes from both viewpoints within a single matrix, but that makes the matrix less easy to understand if you’re new to game theory, so I’m just showing one viewpoint, namely the fox’s, in the matrix below.

| | Fox: go in aggressively | Fox: abandon burger |
| --- | --- | --- |
| Eagle backs off | Fox gets the food, uninjured | No food, uninjured |
| Eagle fights | Possible food, probable injuries | No food, uninjured |

The upper row of outcomes in the matrix shows what happens if the eagle isn’t in the mood for a fight; the lower row of outcomes shows what happens if the eagle is aggressive. I’ve unpacked some issues about these outcomes in more detail below.

From the fox’s point of view, the “go in aggressively” strategy is the only one that has a chance of getting the food. However, it’s also the only strategy that could result in the fox being injured. The key question now is whether the loss from the injuries will outweigh any benefits. If the fox loses the fight with the eagle, the injuries will be a straight loss, with no gains to show; if the fox wins the fight, it will have gains from the food, but losses from the injuries sustained in the fight.
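The trade-off above can be sketched as a small calculation. This is purely illustrative: the payoff numbers and the probabilities of a fight are invented for the sake of the example, not measured from real foxes.

```python
# Illustrative payoffs to the fox, in arbitrary "calorie" units (invented numbers).
payoffs = {
    ("backs_off", "go_in_aggressively"): 500,   # fox gets the burger unchallenged
    ("backs_off", "abandon_burger"):       0,   # no gain, no loss
    ("fights",    "go_in_aggressively"): -300,  # injuries outweigh any food gained
    ("fights",    "abandon_burger"):       0,   # walks away unhurt
}

def expected_payoff(strategy, p_fight):
    """Average payoff of a fox strategy if the eagle fights with probability p_fight."""
    return ((1 - p_fight) * payoffs[("backs_off", strategy)]
            + p_fight * payoffs[("fights", strategy)])

# If a fight is unlikely, aggression pays on average; if it's likely, it doesn't.
print(expected_payoff("go_in_aggressively", 0.3))  # 0.7*500 + 0.3*(-300) = 260
print(expected_payoff("go_in_aggressively", 0.7))  # 0.3*500 + 0.7*(-300) = -60
print(expected_payoff("abandon_burger", 0.7))      # always 0
```

The point of writing it out like this is that the “best” strategy isn’t fixed; it flips depending on how likely the eagle is to fight, which is exactly the question the fox can’t answer for certain.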

So is there some way of measuring the gains and losses in this type of matrix? In many fields, there is a way of doing just that.

For the example above, an evolutionary ecologist could quantify the potential gain in terms of the *calorie value of the food*, and could quantify the losses from the “abandon burger” strategy in terms of *the calories expended in finding the next meal*. Measuring the losses from injury is trickier, but possible; for instance, the calories expended in finding the next meal would probably be higher for an injured animal, or you could measure the average calorie value of the food that the animal found while it was recovering from the injuries, which would probably be lower than the average value after it had recovered.

There’s a whole body of research on *foraging theory* which deals with exactly this type of problem. The classic work was on topics such as strategies used by bees to maximise the amount of pollen and nectar they could gather relative to the energy they expended flying between flowers.

That may sound a bit esoteric, but the same underlying principles apply to a wide range of areas, including consumer behaviour, and online search behaviour, which is of considerable interest to companies such as Google. For instance, one strategy that works well for some types of problem is to do a lot of foraging in a small area, and then make a large move to a new location, where you then repeat the strategy of foraging within a small area around that new location. This has very different implications, in terms of understanding consumer behaviour, from the strategy of moving steadily in one direction while foraging, or the strategy of moving in random directions for random distances.
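The “forage locally, then relocate” pattern is easy to sketch as a random walk. This is only a toy model; the step sizes and the ratio of local moves to large relocations are invented, not taken from the foraging literature.

```python
import math
import random

def area_restricted_search(n_moves, local_step=1.0, jump_step=25.0,
                           moves_per_patch=20, rng=random):
    """Random walk that forages with small steps within a patch,
    with an occasional large relocation to a new patch.
    Returns the list of (x, y) positions visited."""
    x = y = 0.0
    path = [(x, y)]
    for i in range(1, n_moves + 1):
        # every moves_per_patch-th move is a large relocation; the rest are local
        step = jump_step if i % moves_per_patch == 0 else local_step
        angle = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        path.append((x, y))
    return path

random.seed(1)
path = area_restricted_search(100)
print(len(path))  # 101 positions: the start plus 100 moves
```

Plotting a path like this, versus a path of uniformly small steps, makes the difference between the two strategies visually obvious, which is one reason this kind of model is popular for analysing search behaviour.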

**Core concepts**

In this article, I’ll discuss some core concepts from game theory, namely:

- Zero sum games
- Minimax and other payoff types
- Repeated encounters versus single encounters
- Stochastic game theory
- Evolutionarily Stable Strategies
- Strategies
- Currencies

These aren’t the only significant concepts in game theory, but they’re concepts that particularly interest me from a knowledge modelling viewpoint, and that have far-reaching implications in other fields.

*Zero sum games*

In some interactions, one person’s losses become another person’s gains; the gains and losses exactly cancel each other out. This type of interaction is technically known as a *zero sum game*, because the losses and the gains sum (i.e. add up) to zero.

Not all interactions are zero sum games; some interactions are *non-zero sum games*. A classic example is biological systems, where both parties can benefit from an interaction – for instance, insects getting nectar from plants (benefitting the insects) and in the process pollinating the plants (which benefits the plants).

There’s a plausible story that when US politicians first started using game theory experts as advisors during the Cold War, the game theorists were horrified to discover that the politicians viewed all interactions between the West and the Soviet bloc as a zero sum game.

Zero sum games crop up in a wide range of places. Performance league tables, for instance, are zero sum games. This can have a chilling effect on the spread of best practice, since in a zero sum game each player has a strong incentive to keep their good ideas secret, rather than sharing them.
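The “sum to zero” property is easy to state precisely in code. Here’s a minimal check, using the textbook game of matching pennies as the zero sum example; the pollination payoffs are invented numbers, there just to show the contrast.

```python
def is_zero_sum(payoffs_a, payoffs_b):
    """True if, for every combination of strategies, player A's payoff
    and player B's payoff cancel out exactly."""
    return all(payoffs_a[s] + payoffs_b[s] == 0 for s in payoffs_a)

# Matching pennies: one player's win is exactly the other's loss.
pennies_a = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}
pennies_b = {s: -v for s, v in pennies_a.items()}
print(is_zero_sum(pennies_a, pennies_b))  # True

# Pollination-style interaction: both parties gain, so it is non-zero sum.
insect = {("visit", "offer_nectar"): 3}
plant  = {("visit", "offer_nectar"): 2}
print(is_zero_sum(insect, plant))  # False
```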

There’s an insightful discussion of this issue, in relation to education league tables, performance related pay, and medical innovation, here:

http://iteachmaths.blogspot.co.uk/2014/08/is-prp-good-thing.html

In a nutshell, the author shows that you can either have league tables, or you can have rapid spread of best practice innovations, but it’s normally an either/or choice.

*Minimax and other payoff types*

There are names for the different types of payoffs, relating to the loss and the gain associated with each strategy – for instance, some strategies minimise the likely loss, but also minimise the likely gain (*minimin*), whereas others risk the maximum loss, but maximise the gain if they succeed (*maximax*). Bureaucracies usually favour minimin, whereas entrepreneurs tend to be comfortable with maximax.

An example of minimum risk and maximum payoff is buying a lottery ticket, which will at worst cost the price of the ticket, and at best will win the jackpot. Businesses normally favour this minimax strategy when possible, but it isn’t always possible.

Examples of maximum risk and minimum payoff tend to involve young males and the instruction “Hold my drink and watch this”…

It’s useful to know about these four strategies, since they make it easier to assess possible strategies systematically. For instance, if you’re dealing with a large bureaucracy, then there’s not much point in just emphasising the possible gain from a new strategy; what’s more salient to the bureaucracy is whether the strategy would be minimum risk or maximum risk, so you would need to discuss this question explicitly in your pitch.
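Following the informal naming above (low or high risk, crossed with low or high gain), you can label strategies mechanically. The strategies, payoff numbers, and thresholds below are all invented for illustration.

```python
# (worst case, best case) payoffs for some invented strategies
strategies = {
    "bureaucratic caution": (-1, 2),       # little to lose, little to gain
    "lottery ticket":       (-1, 1000),    # tiny downside, huge upside
    "hold my drink":        (-1000, 1),    # huge downside, tiny upside
    "entrepreneurial bet":  (-1000, 1000), # huge downside, huge upside
}

def label(worst, best, risk_threshold=-100, gain_threshold=100):
    """Label a strategy by its risk and gain, in the informal sense used here."""
    risk = "maxi" if worst <= risk_threshold else "mini"
    gain = "max" if best >= gain_threshold else "min"
    return risk + gain  # e.g. "minimax" = low risk, high gain

for name, (worst, best) in strategies.items():
    print(name, "->", label(worst, best))
```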

*Repeated encounters versus single encounter*

You might be wondering whether the choice of strategy is affected by whether the interaction is a one-off, or whether there’s a sequence of interactions.

The answer is that this does indeed make a difference. Repeated interactions with the same individual or the same situation take you into a different area of game theory, which has been extensively studied.

Repeated interactions also take you into some other interesting areas outside game theory. One particularly far-reaching issue involves being able to recognise specific individuals or situations, so that you can use knowledge from previous encounters to help you decide how to handle the current interaction. There’s been a lot of work on *reciprocal altruism*, where individuals build up a relationship of trust with each other via a series of interactions. This occurs in species as varied as vampire bats and wasps, so it’s pretty ubiquitous. Conversely, if the fox in our opening example had previously tangled with the same eagle, and knew that the eagle was aggressive, then the fox would have a better chance of choosing the best strategy available.
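Recognising individuals changes the fox’s decision rule in an easily stated way. Here’s a minimal sketch: the fox tries aggression against strangers, but backs off from any eagle it remembers as aggressive. The eagle identities and behaviour are, of course, invented.

```python
# Eagles the fox remembers as having fought back
known_aggressive = set()

def choose_strategy(eagle_id):
    """Back off from known fighters; otherwise try to scare the eagle away."""
    return "abandon_burger" if eagle_id in known_aggressive else "go_in_aggressively"

def record_encounter(eagle_id, eagle_fought):
    """Update the fox's memory after an encounter."""
    if eagle_fought:
        known_aggressive.add(eagle_id)

# First meeting with eagle 7: no history, so the fox tries aggression.
print(choose_strategy(7))  # go_in_aggressively
record_encounter(7, eagle_fought=True)
# Next meeting: the fox remembers, and walks away.
print(choose_strategy(7))  # abandon_burger
```

The same skeleton, with a more generous forgiveness rule, underlies well-known strategies for repeated games such as tit-for-tat.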

A related issue is how many different individuals you can recognise, and how long you can remember them. Again, this has been studied in some depth. An example is *Dunbar’s number*, which is a figure for how many people an individual can maintain social links with (about one or two hundred). There’s a related concept of *familiar strangers*, i.e. people whom you know by sight, but not personally. In a small organisation, you can know everyone by sight; in a large organisation, you can’t.

This has implications for things such as the social visibility of strangers, and for social dynamics in small villages versus towns, for example. This is one reason that some types of criminals, such as pickpockets and confidence tricksters, tend to gravitate towards towns and cities, where they’re able to blend into an anonymous crowd.

There’s a fascinating discussion of repeated interactions in relation to “tells” in poker on Ed Brayton’s blog.

*Stochastic game theory*

A further refinement of strategy choice is to switch between two or more strategies on an arbitrary basis – for instance, the fox might use an aggressive strategy 80% of the time, and a non-aggressive strategy 20% of the time. *Stochastic game theory* deals with this area, which is fascinating.

For brevity, I won’t go into detail about this area, but it’s a topic that makes a lot of sense of behaviours that might otherwise look contradictory.
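The 80/20 mix above is what game theorists call a mixed strategy, and it takes only a few lines to simulate. The probability is the one from the fox example; everything else is illustrative.

```python
import random

def mixed_strategy(p_aggressive=0.8, rng=random):
    """Pick a strategy at random: aggressive with probability p_aggressive."""
    return "aggressive" if rng.random() < p_aggressive else "non_aggressive"

random.seed(0)
picks = [mixed_strategy() for _ in range(10_000)]
share = picks.count("aggressive") / len(picks)
print(round(share, 2))  # close to 0.8 over many encounters
```

The key point is that an individual encounter is unpredictable, while the long-run proportions are stable – which is exactly why such behaviour can look contradictory if you only see a few encounters.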

*Evolutionarily Stable Strategies*

Some strategies might work well in the short term, but won’t survive in the long term, either because the statistical odds work against them in the long term, or because other individuals remember them, and use different strategies against them in later encounters. High-risk/high gain strategies, for instance, often perform well in the short term, but turn out to be disastrous later on (for instance, if the fox tries to scare off the eagle, but is killed by the eagle).

A strategy that’s sustainable in the long term is technically known as an *ESS* (*Evolutionarily Stable Strategy*). The full story is more complex, and overlaps with the concept of the *Nash Equilibrium* that features in the movie *A Beautiful Mind*. For our purposes, though, this definition gets the key point across.
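A standard way to make “stable in the long term” concrete is the hawk-dove game from evolutionary ecology, which is essentially the fox-and-eagle situation. The sketch below uses the textbook payoffs (resource value V and injury cost C, with invented numbers) and a crude replicator-style update; it’s an illustration of the idea, not a full treatment.

```python
# Hawk-dove game: "hawk" always fights, "dove" backs down. With resource
# value V and injury cost C > V, an all-hawk population is NOT stable:
# the stable mix has hawks at frequency V/C.
V, C = 2.0, 4.0

def payoff_hawk(p):   # average payoff to a hawk when hawks have frequency p
    return p * (V - C) / 2 + (1 - p) * V

def payoff_dove(p):   # average payoff to a dove
    return (1 - p) * V / 2

p = 0.9  # start with 90% hawks: the high-risk strategy doing well early on
for _ in range(200):
    wh, wd = payoff_hawk(p), payoff_dove(p)
    avg = p * wh + (1 - p) * wd
    # replicator-style step: hawks grow when they out-perform the average
    # (the +2 baseline keeps fitnesses positive)
    p = p * (2 + wh) / (2 + avg)

print(round(p, 2))  # settles at V/C = 0.5, the evolutionarily stable mix
```

Notice that the hawks’ early success doesn’t last: as hawks become common, they mostly meet other hawks, and the cost of injuries drags their average payoff down.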

It’s an important concept, because many attractive-looking political theories turn out not to be an ESS. An example is trying to introduce new ideas into government bureaucracies from business, on the assumption that businesses are more innovative and dynamic than bureaucracies.

The flaw in that reasoning is that the most visibly successful businesses at any given moment are likely to be the ones who are currently reaping the benefits of a high-risk/high gain strategy. When that strategy fails disastrously, as it will in time, the company involved goes out of business. That’s part of the natural cycle of business, where over a third of new companies fail in their first year, but it’s not an option for something like a national health or education service.

Large bureaucracies are usually risk-averse for very good reasons, so attempting to change their strategies via policies derived from the very different context of business is usually not a wise idea.

*Strategies*

The word “strategy” in everyday English often has implications of sophisticated overviews and long-term planning. However, it doesn’t have those implications in game theory.

The strategies in game theory don’t need to be explicit strategies that can be put into words. For instance, in evolutionary ecology, and in foraging theory in particular, there’s been a lot of work on the strategies used by insects.

Human beings can also use strategies that aren’t explicit or verbalised. This is an area where game theory overlaps with fields such as counselling and clinical psychology, in terms of how a person behaves in relation to other people. Berne’s model of *Transactional Analysis*, for instance, is best known from his book *Games People Play*. The mention of games in the title is no accident; Berne explicitly uses games as a model for the sorts of regular structures of interactions found in human behaviour.

I find Transactional Analysis particularly interesting because it uses the same underlying concepts to examine the complete range of strategies, from very small scale interactions (another of Berne’s books is called *What do you say after you say Hello?*) up to the scale of an entire human lifespan. This has obvious overlaps with *script theory*, which I’ll examine in another article.

*Currencies*

I opened with the example of a fox looking for food. The reason for this example is that it makes the point that “currency” doesn’t always mean “money”.

This is a very important point. It may look obvious once you’ve started thinking about calories as a currency, but it’s a point that’s often overlooked. This is a particular issue in relation to human behaviour, where there’s a strong tendency for economic models to translate everything into the currency of money (for instance, calculating the effects of a potential disaster in terms of financial cost).

This distinction can make sense of many behaviours that don’t make much sense in terms of traditional economics.

For instance, if you offer research participants a cup of good coffee and an upmarket chocolate biscuit as an incentive, they’re more likely to respond favourably than if you offer them enough money to buy a cup of the same coffee and an entire packet of the same biscuits.

If you assess this behaviour in terms of money, it’s illogical. If, however, you assess it in other currencies, it’s often the more logical choice. For instance, if you measure it in the currency of time or the currency of hassle, then choosing the money option would be more costly in terms of the time or the effort needed to go out and buy the coffee and biscuits. Similarly, if you measure the decision in the currency of social signals, the money is impersonal and low-value, whereas the coffee and biscuit are modestly valuable signals of friendship and courtesy.
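You can make the choice-of-currency point mechanical: score each option in several currencies, and see which one wins under each. All the numbers below are invented for illustration.

```python
# Net value of each incentive to the participant, per currency
# (higher is better; "time" is negative where the option costs minutes of hassle).
values = {
    "cash":               {"money": 3.0, "time": -15, "social": 0},
    "coffee and biscuit": {"money": 0.0, "time":   0, "social": 2},
}

def preferred(currency):
    """The option that scores best when measured in the given currency."""
    return max(values, key=lambda option: values[option][currency])

print(preferred("money"))   # cash
print(preferred("time"))    # coffee and biscuit
print(preferred("social"))  # coffee and biscuit
```

The behaviour only looks illogical if you insist on the money column; in two of the three currencies, the coffee and biscuit are the better deal.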

**Closing thoughts**

Game theory is a powerful approach. The core concepts are elegant; however, that elegance can tempt some people to over-simplify the issues (for example, by ignoring currencies other than money).

It’s an approach that should be more widely known, and that can provide significant new insights into a wide range of phenomena.

*Notes*

You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.

There’s more about the theory behind this article in my latest book:

*Blind Spot*, by Gordon Rugg with Joseph D’Agnese

http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903

*Related articles and links:*

https://en.wikipedia.org/wiki/Evolutionary_game_theory

A classic book, showing how game theory can be applied to evolutionary ecology:

http://www.amazon.com/Evolution-Theory-Games-Maynard-Smith/dp/0521288843

Books about Transactional Analysis:

http://www.amazon.com/Games-People-Play-Transactional-Analysis/dp/0345410033

http://www.amazon.com/What-you-say-after-hello/dp/0394479955
