It’s logic, Jim, but not as we know it: Associative networks and parallel processing

By Gordon Rugg

A recurrent theme in our blog articles is the distinction between explicit knowledge, semi-tacit knowledge and tacit knowledge. Another recurrent theme is human error, in various forms. In this article, we’ll look at how these two themes interact with each other, and at the implications for assessing whether or not someone is actually making an error. We’ll also re-examine traditional logic, and judgement and decision-making, and see how they make a different kind of sense in light of types of knowledge and mental processing. We’ll start with the different types of knowledge.

Explicit knowledge is fairly straightforward; it involves topics such as what today’s date is, or what the capital of France is, or what Batman’s sidekick is called. Semi-tacit knowledge is knowledge that you can access, but that doesn’t always come to mind when needed, for various reasons; for instance, when a name is on the tip of your tongue, and you can’t quite recall it, and then suddenly it pops into your head days later when you’re thinking about something else. Tacit knowledge in the strict sense is knowledge that you have in your head, but that you can’t access regardless of how hard you try; for instance, knowledge about most of the grammatical rules of your own language, where you can clearly use those rules at native-speaker proficiency level, but you can’t explicitly say what those rules are. Within each of these three types, there are several sub-types, which we’ve discussed elsewhere.

So why is it that we don’t know what’s going on in our own heads, and does it relate to the problems that human beings have when they try to make logical, rational decisions? This takes us into the mechanisms that the brain uses to tackle different types of task, and into the implications for how people do or should behave, and the implications for assessing human rationality.

What goes on in the head sometimes stays in the head

A key issue is that the human brain uses different approaches to tackle different types of problem. One approach involves the sort of explicit step-by-step reasoning that is taught in school maths and logic lessons. The other doesn’t. These approaches are very different from each other. The good news is that the differences complement each other, and allow us to tackle problems that would be difficult or impossible with one approach alone. The bad news is that the brain has cobbled the two together in ways that made evolutionary sense at the time, but that don’t seem quite such a good idea now.

In some fields, the two approaches are well recognised and well understood. In other fields, their effects are recognised but the underlying mechanisms and processes are not generally known. In psychology, for instance, Kahneman’s distinction between System 2 and System 1 thinking corresponds closely to these approaches, but his book Thinking, Fast and Slow doesn’t mention two key underlying mechanisms by name. In most fields, however, these approaches are little known and their implications are not recognised.

I’ve used the word “approach” deliberately, as a broad umbrella term covering various mechanisms, processes and representations that tend to co-occur with each other but are conceptually separate.

I’ll begin by describing two key mechanisms, and then proceed to a description of the two approaches.

Serial processing and parallel processing

Two key mechanisms for understanding how the human brain works are serial processing and parallel processing. Serial processing involves proceeding via one action at a time. Parallel processing involves proceeding via two or more actions simultaneously. The images below show this distinction. It looks simple and trivial; in fact, it is simple in essence, but its implications are huge, and nowhere near as widely known as they should be.

Serial processing: One action at a time

[Image: serial processing diagram]

Parallel processing: More than one action at a time

[Image: parallel processing diagram]

The distinction between serial and parallel processing is well understood in computing; it’s also routinely used in fields such as project planning, where some parts of a project have to be done serially, in sequence (e.g. first laying the foundations for a building, and then building the walls on those foundations), whereas other parts can be done in parallel (e.g. having several teams each building the walls of a different building on the same site at the same time).
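
For readers who program, here’s a minimal Python sketch of the same distinction, using the building-site example; the function name and the one-second sleep are invented stand-ins for real work. The serial loop takes roughly the sum of the task times, whereas the thread pool finishes in roughly the time of the longest single task.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def build_walls(building):
        """Stand-in for a slow task; the sleep simulates the real work."""
        time.sleep(1)
        return "walls built on " + building

    buildings = ["building A", "building B", "building C"]

    # Serial: one action at a time; roughly 3 seconds in total.
    start = time.perf_counter()
    for b in buildings:
        build_walls(b)
    print("serial:   %.1fs" % (time.perf_counter() - start))

    # Parallel: three "teams" working simultaneously; roughly 1 second.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=3) as pool:
        list(pool.map(build_walls, buildings))
    print("parallel: %.1fs" % (time.perf_counter() - start))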

Serial processing and parallel processing are closely linked with several other concepts, in particular explicit reasoning and crisp sets in the case of serial processing, and associative networks and pattern matching in the case of parallel processing. I’ll start by describing the combination of serial processing, explicit reasoning, and crisp sets, all of which will be familiar to most readers, even if they haven’t previously encountered those names for those concepts. I’ll look briefly at the obvious, well known limitations of each of these, and then look in more detail at some less obvious, less well known, limitations, whose implications have only become apparent within recent decades. That will set the stage for examining the combination of parallel processing, associative networks and pattern matching, which together can handle challenges that are difficult or impossible using the serial/explicit/crisp combination. I’ll then look at the implications of having both these approaches being used within the brain.

The serial processing/explicit reasoning/crisp set package

We’ll begin with a simple traditional example of how formal logic uses serial processing and explicit reasoning.

  • Socrates is a man
  • All men are mortal
  • Therefore Socrates is mortal

Formal logic has an extensive set of technical terms to describe this set of steps, but we’ll leave those to one side, and focus instead on some other ways to describe them. These involve issues whose importance went largely unrecognised until fairly recently. Three key issues for our purposes are that the reasoning follows a sequence of steps, that the reasoning is stated explicitly in words and/or symbols, and that the reasoning uses a crisp set categorisation where Socrates either is a man or isn’t, with no in-between values allowed.
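
Here’s a minimal sketch of what those three properties look like in code; the set contents are invented for illustration. The reasoning runs step by step, everything is stated explicitly in symbols, and set membership is all-or-nothing.

    # Crisp sets: something is either in the set or it isn't; no in-between.
    men = {"Socrates", "Plato"}
    mortals = set()

    # "All men are mortal", applied as an explicit, serial rule:
    for person in men:
        mortals.add(person)

    # "Therefore Socrates is mortal": crisp membership is all-or-nothing.
    print("Socrates" in mortals)   # True; no degree of mortality is possible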

So, why are these points important?

For some tasks, the sequence in which you perform the steps doesn’t matter. If we add 4 and 2 and 100, we’ll get the same result regardless of the order in which we add them together. For other tasks, however, sequencing is crucially important. For instance, if we multiply 4 by 2 and then add 100 to the result, we get a value of 108; if we instead add 2 to 100 and then multiply the result by 4, we get a value of 408.
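
Written out in Python, the same two sequences (operator precedence handles the first; the parentheses force the second):

    print(4 * 2 + 100)     # multiply first, then add: 108
    print((2 + 100) * 4)   # add first, then multiply: 408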

Keeping track of the steps in sequences like these in your head is difficult, even for small problems like the ones above. It’s no accident that mathematics and formal logic are big on writing everything down, so that you’re not dependent on memory. Human working memory only has a capacity of a handful of items (about seven plus or minus two). The multiplication example above requires remembering three initial numbers, two steps, another number (when you multiply two of the initial numbers together) and yet another number (when you add two of the numbers together). That takes you to seven items for a trivially small example. This is one reason that a lot of people hate maths and logic; they find those subjects difficult because of the limitations of the human cognitive system, but often blame the difficulty on themselves or on the subject area, rather than on a cognitive limitation inherent in the human brain.

That leads us to the second point, about the reasoning being stated explicitly in words and/or symbols.

Spelling out the steps and the numbers explicitly in written words and symbols is an obvious way of getting past this bottleneck in human cognition, though it comes at the price of having to learn the writing system and specialist notations; that’s another reason that a lot of people find maths and logic unwelcoming.

The combination of serial processing, reasoning stated in words/symbols, and writing is powerful, and lets you do things that wouldn’t be possible otherwise. These are very real advantages, and they can be applied to important problems in the real world, in long-established fields such as architecture, engineering and planning, which require calculations that most humans can’t do in their heads. It’s no surprise that this combination acquired high status very early in human civilisation, and still has high status in academia and in the rest of the world.

The third point, about crisp set categorisation, doesn’t necessarily have to be part of a package with serial processing and explicit reasoning, but has ended up frequently associated with them for largely historical reasons. Crisp sets involve sets (broadly equivalent to categories or groups) that have sharp boundaries; something either goes completely inside a set, or it goes outside it, with no intermediate states allowed.

In the Socrates example above, for instance, man is implicitly treated as a crisp set; either someone is a man, or they aren’t; similarly with mortal. The limitations of this approach were recognised from the outset, with ancient Greek philosophers asking questions such as at what point someone went from the category of “beardless” to “bearded” if these were being treated as two crisp sets; just how many hairs were involved, and why that number of hairs rather than one more or one fewer? Up till the nineteenth century, such questions were generally answered by the academic establishment with the classic Ring Lardner response of “‘Shut up,’ he explained”.

Problems with the serial processing/explicit reasoning/crisp set package

Then some researchers started using new approaches to logic and mathematics, which turned out to be internally consistent, and to give very different and very useful results. Non-Euclidean geometry is one example; although many people loathed it at the time of its invention (Lovecraft used it in his horror stories as an indicator of eldritch monstrosity beyond sane human comprehension), it turned out to correspond with reality in unexpected ways; for example, satellite navigation systems depend on it. Another example is Zadeh’s invention of fuzzy logic, which provides a useful numerical, scalar way of handling concepts such as “fairly human”, which the previous binary “either human or not human” systems couldn’t cope with.
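
Here’s a minimal sketch of a fuzzy membership function in Zadeh’s sense; the category, the masses and the 20 kg and 200 kg breakpoints are all invented for illustration. Instead of a binary in-or-out answer, membership is a number anywhere between 0 and 1.

    def membership_large(mass_kg):
        """Degree of membership in the fuzzy set 'large animal', from 0.0 to 1.0."""
        if mass_kg <= 20:
            return 0.0
        if mass_kg >= 200:
            return 1.0
        return (mass_kg - 20) / (200 - 20)   # linear ramp between the breakpoints

    for animal, mass in [("house cat", 4), ("leopard", 60), ("tiger", 220)]:
        print(animal, round(membership_large(mass), 2))
    # house cat 0.0 / leopard 0.22 / tiger 1.0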

Initial responses from the establishment to such concepts were mainly hostile, but gradually academic logicians and mathematicians came to accept them, and realise that they were important additions to the academic tool kit, as well as being compatible with the serial processing package. This news still hasn’t made it through to many school syllabus boards, so it’s possible to go through a high school education without hearing about anything other than old-style formal logic. This has far-reaching implications. A lot of public policy decisions by politicians, and decisions in organisations and in everyday life, are based on old-style logic and crisp sets, which are very limited and limiting in ways that we’ll look at later.

That’s one set of problems with the serial processing/explicit reasoning/crisp set package. There’s another set of problems which are less obvious, but which have equally important implications. These weren’t spotted until computing researchers tried tackling a problem which everyone assumed would be simple, but which turned out to be far more difficult than anyone had imagined. That problem was computer vision.

Back in the 1960s and 1970s, computing research was making dramatic progress in numerous directions. One direction that hadn’t been explored involved connecting a computer to a digital camera so that the computer could see its surroundings. That in turn would allow robots to see their surroundings and to make appropriate decisions about how to proceed; for instance, to identify obstacles and to plot a route round them. The first attempt to tackle this problem, at one of the world’s best research centres, assumed that it could be sorted out within a summer project. Over half a century later, this problem has turned out to be one of the most difficult, still only partially solved, challenges in computing. To demonstrate why it’s still a major problem, we’ll use the example of identifying the animal in a picture such as the one below.

[Image: photograph of a walking tiger]

https://commons.wikimedia.org/wiki/File:Walking_tiger_female.jpg

The parallel processing/associative networks/pattern matching package

If you try to identify the animal in the image using old-style formal logic, you might start with something like the following approach.

  • IF the animal is large
  • AND the animal is a carnivore
  • AND the animal has stripes
  • THEN the animal may be a tiger
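
Translated directly into code, the rule looks deceptively complete. This is only a sketch; the three predicates are taken straight from the rule’s clauses, and all the genuinely hard perceptual work is hidden inside the True/False inputs, which the rule simply assumes have already been worked out somehow.

    def may_be_tiger(is_large, is_carnivore, has_stripes):
        # Every predicate is crisp: True or False, with no partial credit.
        return is_large and is_carnivore and has_stripes

    print(may_be_tiger(True, True, True))    # True: may be a tiger
    print(may_be_tiger(True, True, False))   # False: one failed test vetoes the lot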

There are some obvious weaknesses in this approach, such as the fuzzy definitional issue of what constitutes “large”, which we’ve mentioned above. A less obvious weakness involves issues that are easy to overlook because they’re so familiar. In this example, there’s the question of how you can tell that something is an animal; there’s also the question of how you can tell that something has stripes. If you ask someone how they can tell that something has stripes, they’ll almost certainly struggle to put an answer into words, even though they’ll know what stripes are, and can recognise stripes easily.

Early logicians didn’t get into the specifics of how you can define concepts like stripes, possibly because they viewed such details as too trivial to deserve their attention. Early computer scientists had to tackle those specifics head-on to solve the computer vision problem. It rapidly became clear that they were far from trivial.

Cutting a very long story very short, it is just about possible to define some concepts like stripes using the serial processing package. However, doing so with that approach would be impractically slow even on the most powerful currently available computers, and would perform very poorly in anything other than tightly restricted settings, such as viewing items on factory production lines.

A much more effective and practical way of handling this problem is to use the parallel processing package. That’s what we humans use every day for tasks such as finding the front door key. It’s very fast, and it’s good at handling imperfect and incomplete information with “best guess” solutions, and it’s good at handling new situations, even if it isn’t always right. It’s very different, though, from the serial processing package, and it’s hardly ever mentioned in schools or in texts about problem solving or best practice. So what is the parallel processing package, and what are its implications?

The pair of images below shows the core concepts. They represent input from a first layer of circles (on the right in both images) feeding into a central set of circles (in the centre of both images), which leads to a decision (the single circle on the left in both images). The first layer of circles represents cells in the retina of the eye; the second layer of circles represents cells in the brain. It’s a very, very simplified version of what actually happens in the brain, where billions of cells are involved, but it shows the key concepts.

[Image: unweighted and weighted network diagrams]

This is a network, and the cells within each layer work simultaneously with each other (in parallel). The retinal cells do their work first, and then pass their signals on to the brain cells for processing, so there’s an element of serial processing going on as well, but we’ll leave that to one side for the moment. In the very simplified diagram above, some of the retinal cells are connected to more than one brain cell; others aren’t. In the image on the left, all the connections are shown with the same strength; in the image on the right, some connections are much stronger than others, as shown by thicker lines.

What now happens is that the brain learns to associate a particular combination of actively firing cells in the network with a particular concept, such as tiger. This is an associative network, working via pattern matching (i.e. matching the pattern of cells being activated). In this example, one set of associations in the network tells you that the image shows an animal; another set tells you that it has four legs; another set tells you that it has stripes. Each of these sets will usually have different weightings, which the brain adjusts through experience. All these associations are working at the same time. There are no words involved until near the end; there are just neurons firing. You can’t explicitly state what’s going on in the mental processing; all you could do is say which cells are active. This is very different from the serial processing package, where you can state explicitly what the steps of reasoning are, and what information is being used.
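
Here’s a minimal sketch of that weighted-network idea. The feature names, weights and threshold are all invented for illustration; in a real brain there would be nothing as tidy as named features, just vast numbers of cells and connection strengths tuned by experience.

    # A toy associative network: three feature signals feed one "tiger" unit.
    weights = {"animal": 0.5, "four_legs": 0.3, "stripes": 0.9}
    threshold = 1.2

    def tiger_activation(signals):
        # Conceptually, all the inputs arrive at once (in parallel);
        # the unit simply sums its weighted inputs.
        return sum(weights[f] * signals.get(f, 0.0) for f in weights)

    # Imperfect input: the stripes are only partly visible.
    scene = {"animal": 1.0, "four_legs": 1.0, "stripes": 0.8}
    activation = tiger_activation(scene)
    print(activation, activation >= threshold)   # 1.52 True: "best guess" match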

For our purposes, these are the key concepts, though the full version is somewhat more complicated…

In summary, then, the parallel processing package typically uses associative networks in the human brain, performing pattern matching, using parallel processing. It’s very different from the serial processing package, which uses explicit reasoning based on words and symbols, using serial processing.

Putting the pieces together

So, in brief, the human brain uses two very different approaches for reasoning. This isn’t a simple binary distinction like the pop psychology claims about left-brained and right-brained thinking. It’s more complex. Almost all non-trivial tasks involve a combination of both approaches, with rapid switching between the two for different sub-tasks and sub-sub-tasks.

Many tasks inherently require a predominant use of one approach; for instance, solving a complicated equation draws heavily on the serial/explicit/crisp package, whereas identifying an object seen from an unfamiliar angle draws heavily on the parallel/associative/pattern-matching package.

However, sometimes it’s possible to perform a particular task mainly via either one of these approaches. This can lead to communication problems if two people are using different approaches for the same problem. Here are some simple examples.

First example: If you’re counting something, such as how many screws came with an IKEA pack, you’ll probably do the counting as a serial process, but first you’ll have to use pattern matching and parallel processing to identify the screws, as opposed to other similar-sized components that may be in the same pack.

Second example: If you’re adding together numbers in a written list, you’ll probably use serial processing for the maths, but you’ll use pattern matching and parallel processing to identify what each of the numbers is in the list. If you’re doing a complex task of this type, you’ll do a lot of switching between the serial processing package and the parallel processing package.

Third example: I used the word “probably” in the previous paragraph because for counting small numbers, up to about seven plus or minus two, you can use either serial processing or parallel processing. For small numbers of items, you can “just tell” what the number is without counting them in sequence. This happens via a process known as subitising, which is parallel processing applied to counting. This is probably the mechanism used by animals that can count; interestingly, the upper limit that most animals can count to is about seven.

Fourth example: It’s possible to learn social skills by observing people, and learning the patterns in their behaviour nonverbally via parallel processing. Some people use this approach, and tend to be fluent in their social behaviour, even though they may have trouble describing the social rules in words. Other people have difficulty with this approach, and instead learn the rules explicitly, often via asking someone to explain them.

We’ll return to this topic in more depth and breadth in later articles.

Conclusion

In summary, there are two packages of mechanisms, processes and representations that are used by the human brain. These packages are very different from each other. One package is good for problems that require explicit, step-by-step reasoning that can’t be handled within human working memory. The other package is good for identifying what objects are, and for working with information that is incomplete, uncertain, or otherwise imperfect, to produce a “best guess” solution.

The two packages usually complement each other fairly neatly, with each being used for different types of problem. Sometimes, though, both packages can be applied to the same problem, and this can lead to trouble.

An example we used above involves learning social skills; if someone isn’t able to learn social skills via the parallel processing package, and instead has to learn them via the serial processing package, then their social skills probably won’t be as fluent as they would have been otherwise.

Another example involves judgments in everyday life, such as assessing how safe an action would be, or assessing the claims of a political candidate. Using the parallel processing package in these contexts makes people susceptible to numerous biases and errors. The problem is made worse when the brain associates a particular concept with other concepts that have strong emotional values. A classic example of this is politicians giving “word salad” speeches, where the statements in the speech don’t make much or any sense from the viewpoint of the serial processing package, but are all strongly associated with positive values from the viewpoint of the parallel processing package. For someone making a snap decision, the parallel processing package gives a quick answer, which can be very useful in some situations, and very misleading in other situations.

Once you know about these packages, you’re in a much better position to work with them in ways that make the most of their separate strengths, and that reduce the risk of being let down by their separate weaknesses. For instance, you can learn how to spot patterns via the parallel processing package that will give you rapid insights that wouldn’t be possible via the serial processing package. A lot of professional training, in fields ranging from medical diagnosis to mechanical engineering, involves learning such patterns. Similarly, you can learn how to use the serial processing package to systematically tackle problems that are difficult or impossible to handle via the parallel processing package; that’s a key part of most formal training.

That’s a brief overview; we’ll return to these issues in more depth, and with examples from more fields, in later articles.

Notes and links

You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.

You might also find our websites useful:

This one is the larger version: https://www.hydeandrugg.com/

This one is intended for beginners: https://hydeandrugg.org/
