By Gordon Rugg
There is a general consensus that problem solving skills are a Good Thing, and that the education system needs to encourage them.
So far, so good. The consensus doesn’t go much further, though. It rapidly bogs down in long-running arguments about what problem solving skills actually are, how to measure them, and how to teach them. Those arguments follow a familiar pattern, with disputes about the True Definition, and invocations of Great Thinkers such as Socrates and Plato and Wittgenstein. The fact that those arguments have been rumbling on inconclusively for decades is a strong hint that maybe they’ve been framed in the wrong way from the outset, and that framing them differently might be a good idea.
That’s what this article is about. It describes more productive ways of handling these concepts, with particular reference to definitions, education theory and educational practice. It’s based on what happened when the field of Artificial Intelligence tried to produce software that would find creative solutions to real world problems. It’s a story of how re-framing the issue with subtly but profoundly different concepts gave a powerful, efficient set of solutions that changed the world. It’s a story that most people have never heard. It’s also a story that should transform the way that we tackle this aspect of education.
For generations of schoolchildren, the word “problem” is forever associated with an opening line about some number of men taking some number of days to dig a hole. The responses to this opening line fall into three groups:
- “This is going to be a simple arithmetic problem”
- “They might as well ask me how long it will take six beetles to walk across a barrel of tar wearing hobnailed boots on a Tuesday in Hanoi”
- “Yes, but calculating how long it will take to dig the hole with a different number of men isn’t that simple, and anyway, why is it only men who dig holes in these problems?”
The traditional education system handled these groups of responses in simple, time-honoured ways.
- The “It’s simple arithmetic” children produced the approved answers and received good marks.
- The “It might as well be beetles in hobnailed boots” children produced guesses or no answers, and ended up with poor marks.
- The “It’s not that simple, and anyway” children were told to pretend that life really was that simple, and were labelled as troublemakers.
This approach had a lot of problems, but it did have some sensible-looking foundations. There clearly were demonstrably correct solutions to the problems being posed, if you accepted the simplifying assumptions; the mathematics clearly worked. Those solutions were based on demonstrably correct logic and mathematics; they weren’t arbitrary social conventions. It was also clear that some people were better than others at solving this type of problem.
The implication was that skill in using logic to solve these small problems would transfer in some way into skill in solving bigger, real-world problems. In some cases, this was clearly true. A lot of civil and mechanical engineering, for instance, involves substantial amounts of mathematical calculation.
The situation was similar with regard to logical puzzles, such as the Tower of Hanoi. This is a game which requires the player to move hoops from one post to another via a limited set of permitted moves. Again, some people are much better at this than others.
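As a concrete illustration, the Tower of Hanoi has a well-known recursive solution. Here is a minimal sketch in Python (the function and variable names are my own, chosen for readability):

```python
def hanoi(n, source, target, spare, moves):
    """Append to `moves` the steps needed to shift n hoops from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the smaller hoops out of the way
    moves.append((source, target))               # move the largest remaining hoop
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller hoops on top of it

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 7, i.e. 2**3 - 1, the provable minimum for three hoops
```

The point of the sketch is that once the trick is seen, the puzzle collapses into a few lines of completely mechanical logic, which is exactly why such puzzles turn out to be computationally easy.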
The conventional wisdom was that these types of problems reflected some form of “pure” problem-solving ability that reflected a higher type of mental process. The meaning of “higher” in this context was never completely unpacked; it was part of a package of beliefs about civilisation, culture and intelligence that were widely accepted as self-evidently true, with the only debate being about details of the definitions.
These beliefs were increasingly challenged by radical freethinkers, from a sociological and political perspective. The ensuing culture wars have far-reaching implications for education policy, with heated debate raging about a wide variety of beliefs and assumptions.
Most people working in education are very much aware of those debates. However, very few people in education are aware of a different set of challenges to the traditional assumptions about problem-solving. Those challenges came from Artificial Intelligence (AI).
AI and problem solving
A key feature of AI is that it involves building software to do things. Building and doing things are both hard, unforgiving tests of a theory. They force you to think through everything involved, with no scope for slips of memory or unfounded assumptions. The results are often very unexpected.
A major case of unexpected results occurred in 1959, when Simon, Shaw and Newell tested their General Problem Solver software. It was intended to do what the name says, namely to solve problems in general, as opposed to specific problems such as the Tower of Hanoi. The core assumption was that using systematic, rigorous logic would allow the software to solve problems at least as well as humans could. That didn’t happen. The reasons for that failure gave AI researchers a powerful new set of insights into the nature of real, non-trivial problems, and into ways of tackling those problems. AI systems are now routinely used to handle a wide variety of real-world problems, ranging from assessing loan applications in banks to scheduling ship unloading in major ports.
With regard to problem solving, ironically, AI researchers found that formal logical problems such as the hole-digging problem and the Tower of Hanoi are actually computationally simple. The fact that most humans were bad at solving these problems wasn’t due to the problems being inherently difficult; those problems were actually inherently easy. They involved small numbers of entities, small numbers of rules, and small numbers of possibilities that the software needed to explore.
The large body of AI research since that early study has produced much the same type of findings, and has produced a much clearer understanding of the apparent paradox. Human beings are, ironically, pretty good at solving real-world problems, such as physically digging a hole in the ground with pick and shovel, which are extremely difficult computationally. Human beings are usually very bad at solving computationally simple “two men digging a hole” problems, probably because the human brain hardly ever encountered them during the course of its evolution, so it never needed to develop the specialised machinery for them.
The next section examines the differences between these two types of problem.
Constrained problems and real-world problems
In most respects, the constrained (i.e. limited and deliberately simple) problems used in maths exercises and in a lot of problem-solving workshops are diametrically opposed to real-world problems. Some of those differences are as follows.
Known, correct solution or not
Constrained problems typically have a single, known, correct solution. Most real-world problems don’t. Often, real-world problems have more than one solution, or have solutions that are good enough, as opposed to correct. With many real-world problems, it’s impossible to tell whether or not there is a solution.
When you’re dealing with constrained problems, it’s often possible to work out all of the possible outcomes, and to search through those for the route to the solution that you want. With real-world problems, though, the number of possible outcomes is often too vast to be worked out, so you can’t search through them all.
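To make that concrete, here is a sketch of exhaustively searching a constrained problem: a breadth-first search over every legal state of a three-hoop Tower of Hanoi. The encoding is my own, but it shows how small the whole space is (3³ = 27 states), which is why a complete search is feasible:

```python
from collections import deque

def legal_moves(state):
    """state[i] = post holding hoop i (0 = smallest hoop, posts are 0/1/2).
    A hoop may move only if no smaller hoop sits on its post, and only to
    a post holding no smaller hoop."""
    for hoop, post in enumerate(state):
        if any(state[h] == post for h in range(hoop)):
            continue  # a smaller hoop is stacked on top of it
        for dest in range(3):
            if dest != post and all(state[h] != dest for h in range(hoop)):
                yield state[:hoop] + (dest,) + state[hoop + 1:]

def shortest_solution(n=3):
    """Breadth-first search: visit every reachable state, so the first
    time we reach the goal we know the route is the shortest possible."""
    start, goal = (0,) * n, (2,) * n
    seen, frontier = {start: 0}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return seen[state]
        for nxt in legal_moves(state):
            if nxt not in seen:
                seen[nxt] = seen[state] + 1
                frontier.append(nxt)

print(shortest_solution(3))  # 7 moves, found by checking all 27 states
```

A real-world problem offers no such option: the state space is usually too vast, too fuzzy, or too poorly defined to enumerate at all.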
Algorithms versus heuristics
With constrained problems, there’s usually a method which is guaranteed to give you the answer, if you apply it consistently; this is known as an algorithm. With real-world problems, on the other hand, there usually isn’t an algorithm which is guaranteed to find the answer. Instead, you have to use rules of thumb (known as heuristics) which improve your chances of finding a solution if there is a solution to be found, but which don’t guarantee that you’ll find one.
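The contrast can be sketched with a toy routing problem (the stops and coordinates are invented for illustration): an algorithm that tries every possible route and is guaranteed optimal, beside a nearest-neighbour heuristic that is fast and usually decent, but carries no guarantee:

```python
from itertools import permutations
from math import dist

# Hypothetical stops on a delivery round, as (x, y) coordinates.
stops = {"A": (0, 0), "B": (1, 5), "C": (5, 4), "D": (6, 0)}

def tour_length(order):
    return sum(dist(stops[a], stops[b]) for a, b in zip(order, order[1:]))

def exact_algorithm(start="A"):
    """Algorithm: try every ordering of the stops. Guaranteed optimal,
    but the number of orderings grows factorially with more stops."""
    rest = [s for s in stops if s != start]
    return min(((start,) + p for p in permutations(rest)), key=tour_length)

def greedy_heuristic(start="A"):
    """Heuristic: always visit the nearest unvisited stop. Fast and often
    good, but with no guarantee of finding the best route."""
    order, left = [start], set(stops) - {start}
    while left:
        nxt = min(left, key=lambda s: dist(stops[order[-1]], stops[s]))
        order.append(nxt)
        left.remove(nxt)
    return tuple(order)

print(tour_length(exact_algorithm()) <= tour_length(greedy_heuristic()))  # True
```

With four stops the exhaustive algorithm is trivial; with forty it is hopeless, and the heuristic is all you realistically have. That trade-off is the everyday texture of real-world problem solving.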
Perfect, complete knowledge versus the real world
Most constrained problems involve a complete set of accurate, reliable starting information, such as how big a hole two men can dig in three days. Most real-world problems don’t have this luxury. Instead, you often have to deal with incomplete information, inaccurate information, probabilistic information and/or unreliable information.
Traditional formal logic isn’t much use when dealing with this sort of messy, horrible information; instead, you have to use heuristics and various forms of specialised logic which give you answers that have a specified likelihood of being correct, as opposed to being definitely correct.
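A toy illustration of one such specialised form of logic is Bayes’ rule, which turns unreliable evidence into an answer with a stated likelihood of being correct. The numbers below are invented purely for illustration, loosely echoing the loan-assessment example mentioned earlier:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Invented figures: 1% of applicants will default (the prior), and a
# screening check flags 90% of future defaulters but also wrongly flags
# 5% of safe applicants. How worried should a flag make the bank?
posterior = bayes_update(prior=0.01,
                         p_evidence_if_true=0.90,
                         p_evidence_if_false=0.05)
print(round(posterior, 3))  # 0.154 -- a likelihood, not a certainty
```

The answer is a probability of roughly 15%, not a definite yes or no, which is precisely the kind of output that messy real-world information supports.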
So, where does that leave us?
The short version is that the term “problem solving” is being used to refer to two utterly different concepts, with most people being unaware of the profound differences between the two concepts.
This is a classic starting point for confusion, apparently contradictory evidence, and acrimony.
However, if we make this distinction explicit, and build it into an education framework, then we can distinguish clearly between these two types of problem solving, and can make the correct choices about which type to teach in which places in a curriculum, and about how to assess the two types.
I hope that this article is helpful, and that it will help with the move towards a more evidence-based approach to education.
Notes and links
The painting in the left part of the banner is from Wikimedia Commons:
You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.