By Gordon Rugg
Why don’t students learn?
It looks like a simple question, which ought to have a simple answer. In reality, understanding why students don’t learn takes us into concepts such as passive ignorance, active ignorance, systems theory, belief systems, naïve physics and cognitive biases. In this article, I’ll skim through some key concepts, to set the scene for a series of later articles that go into more depth about those concepts and about their implications both for education and for other fields.
Bucket image copyleft Hyde & Rugg, 2014; other images from Wikimedia (details at the end of this article)
Some people view education as a process like filling a bucket of ignorance from the tap of knowledge; knowledge goes in and the bucket fills up. As anyone who has worked in education can attest from painful experience, this isn’t the full story. Often, education is more like trying to fit an angry small child into clean clothes (if you haven’t had that particular experience, you can imagine trying to fit a lively octopus into a string-mesh bag). Students often actively resist knowledge, or wilfully misinterpret it in ways so bizarre that you start seriously wondering what’s gone wrong inside their heads.
(A quick note about terminology: I’ll be using the terms “information” and “knowledge” fairly interchangeably throughout this article, for simplicity; for most of the issues described below, the same principles apply both to information in the strict sense and to knowledge in the strict sense.)
Passive ignorance and active ignorance
For some topics, in some ways, students appear very much like empty buckets into which knowledge can be poured. An obvious example is foreign language vocabulary, particularly when that language is not related to a language that the student already knows. Students in this situation can be described as being passively ignorant.
Often, though, you’re dealing with a situation that’s more like fitting pieces into a jigsaw puzzle than like filling a bucket. The students are actively trying to connect the new information with the existing knowledge structures in their heads. They aren’t actively resisting what they’re being told, but neither are they passive receptacles.
This process of knowledge-fitting often encounters problems where the new knowledge doesn’t appear to fit with the old knowledge, and the student then has to work out whether they have misunderstood the new knowledge, or whether their own previous knowledge was mistaken. That’s hard work, and it takes time. If the student is trying to do this while also taking notes on a stream of further information, then it’s a recipe for trouble, especially when the student’s model is very different from what the educator intended.
A third situation that can arise involves active ignorance. This is where the student assesses the new information, and makes an active decision to reject it as incorrect.
Here’s an example.
I was lecturing to a group of first year computing undergraduates about assessing software requirements and interface design. I went into some detail about the strengths of techniques such as think-aloud and laddering, which let you get at information that’s difficult or impossible to access via interviews and questionnaires.
At the end of the lecture, a couple of students came up to me and told me that the industry standard was to use interviews and questionnaires, so that’s what they would use for assessing requirements and designs.
In other words, they’d completely rejected everything that I’d just told them, and were proposing to use techniques that were unsuitable because of reasons that I’d explained in depth. What was going on there?
What was probably happening was this. The students were first years, and had probably had some work experience over the summer before starting university. From subsequent conversations, it appears that their relevant beliefs were as follows.
- Lecturers are the university equivalent of teachers.
- Teachers don’t have first-hand experience of the real world.
- We have first-hand experience of the real world via our summer work.
- What we’re being told is different from what we experienced.
- The simplest explanation is that the lecturer is unfamiliar with what’s happening in the real world.
- We will politely update the lecturer in private so as not to embarrass him in front of the class.
Phrased this way, it makes a lot more sense. It’s still wrong, but it makes a twisted kind of sense.
I’ve changed some parts of my lecturing repertoire to handle this issue, since it’s an important one in a discipline that has significant interactions with the practical world. Some points that I make explicitly in lectures are:
- Here’s how university lecturers are different from teachers.
- Here are some illustrative stories from my experience in industry.
- Here’s an overview of different industry sectors, and of how they look for different things in graduate job applicants.
- Here are some things about real-world software development that you won’t see in the textbooks.
- Here’s how the specialist techniques that I’m lecturing about will put you in a better position in the job market.
I’m careful to phrase these points in a way that can fit comfortably into most students’ belief systems, to minimise the risk of setting up an adversarial situation; this is a theme that I’ll explore in more detail below and in later articles.
Modelling the process
So, learning is an active process, in which the learner constantly tests new information against their existing information structures to see how well the new information fits. In this section, I’ll look at what insights we can gain from modelling this process more formally.
The jigsaw analogy of information-fitting is a familiar one, but it doesn’t capture some aspects of the process. A better way to model the process is to use a network diagram like the one below. I’ve deliberately made it as small as possible for simplicity and clarity.
In this network, there are three green circles in the centre, each of which is joined to both the other green circles. The green circle with the red border is joined to the black circle with the red border at the top of the network. The green circle with the blue border is joined to the yellow circle with the blue border at the bottom left of the network.
The rule here is that a circle can be joined to another circle if the centre or the border of both circles is of the same colour. I’ve included this rule to demonstrate how the information-fitting process involves decision-making on more than one criterion at a time.
Now, suppose that we want to add a new circle, with a yellow centre and a black border, to the network.
Where would this circle go?
We could join it to the green circle with the black border (on the basis of border colour) or to the yellow circle with the blue border (on the basis of centre colour), or we could join it to both those circles, as in the diagram below. We don’t have to join it to both, but most people would probably do so, for reasons that they might have trouble putting into words, such as an idea that this would be a tidier solution.
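The joining rule above can be sketched in code. This is a hypothetical illustration, not anything from the original article: the circles and their colours follow the network described above, and the names (`can_join`, `network`, `new_circle`) are invented for the example.

```python
# Sketch of the network's joining rule: two circles may be joined if
# their centres match OR their borders match.

def can_join(a, b):
    """Return True if circles a and b share a centre colour or a border colour."""
    return a["centre"] == b["centre"] or a["border"] == b["border"]

# The existing network, as described in the text (centre/border colours).
network = [
    {"name": "green/red",   "centre": "green",  "border": "red"},
    {"name": "green/blue",  "centre": "green",  "border": "blue"},
    {"name": "green/black", "centre": "green",  "border": "black"},
    {"name": "black/red",   "centre": "black",  "border": "red"},
    {"name": "yellow/blue", "centre": "yellow", "border": "blue"},
]

# The new circle: yellow centre, black border.
new_circle = {"centre": "yellow", "border": "black"}

# Which existing circles could the new one legitimately join to?
candidates = [c["name"] for c in network if can_join(new_circle, c)]
print(candidates)  # → ['green/black', 'yellow/blue']
```

As in the diagram, the new circle has exactly two legitimate attachment points, each satisfying a different criterion: one via border colour, one via centre colour.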
This is closely analogous to what happens in the neural networks of the human brain when new information is being assimilated, often using a different criterion for each connecting branch. The more connections that a new piece of information has with the existing network, the better its chances of becoming an established part of that network.
At a practical level, this is something that we can easily build into our education practices. If you present the students with multiple pieces of information relating to a key concept, then this improves the chances of that concept becoming solidly embedded in the students’ memories.
Similarly, you can use the known biases of human memory to help make a key concept easier to remember. For instance, you can use vividness bias (i.e. that we’re disproportionately more likely to remember something vivid than something dull) by adding vivid details when you describe that concept. Similarly, you can make use of episodic memory (i.e. memory for specific events that were experienced personally) by doing something unusual when explaining a key concept, so that students remember the episode. Sue Gerrard and I discuss this in more detail in one of our papers (details at the end of this article).
There are clear practical advantages for educators in having an accurate model of how memory associations work in the human brain. This also helps us to understand student resistance to learning, whether of specific topics or more generally, as discussed below.
Removal and change
So far, we’ve looked at addition to, and elaboration of, mental knowledge structures. In the case of the first year students’ misconceptions, my response was to provide them with a richer, deeper model of what happens at university, which didn’t directly challenge any of their beliefs. The nearest I came to a challenge was in explaining the difference between teachers and lecturers. This wasn’t a major issue, because the students were using analogical reasoning to extrapolate what lecturers might be like, as opposed to having firm beliefs about what lecturers actually are like.
The situation is very different when you have to remove or substantially change the contents of a well-established set of beliefs.
A classic example is the field of naïve physics. There’s been a fair amount of research on this over the years, much of it involving study of Wile E. Coyote and the Road Runner (yes, seriously). Non-physicists have informal, unsystematic beliefs about how physical objects behave; these beliefs are often internally fairly consistent, but simply wrong. A classic example is whether a dense object dropped from a moving vehicle will fall straight down, or will follow a curving path. A student beginning formal instruction in real physics will have to unlearn their mistaken naïve physics beliefs. Unlearning is non-trivial.
That’s a significant problem when you’re just dealing with beliefs about what happens when you drop an object out of a moving train. It’s a much more profound problem when you’re dealing with beliefs that are part of a person’s core identity.
A classic example in computing is teaching the principles of good design.
A lot of these principles involve systematic, unglamorous concepts such as counting the number of keystrokes or mouse clicks needed for the most important tasks, and then looking for ways to reduce that number.
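The counting principle can be sketched in code. This is a hypothetical illustration; the task steps and action costs below are invented for the example, not taken from the article.

```python
# Sketch of comparing two interface designs by counting the user
# actions (keystrokes and clicks) needed for an important task.

def action_count(steps):
    """Total keystrokes/clicks for a task, given (description, cost) steps."""
    return sum(cost for _, cost in steps)

# Invented example task: saving a file under a new name, in two designs.
design_a = [("open File menu", 1), ("click Save As", 1),
            ("type filename", 8), ("click OK", 1)]
design_b = [("press Ctrl+Shift+S", 1), ("type filename", 8), ("press Enter", 1)]

print(action_count(design_a), action_count(design_b))  # → 11 10
```

The point of the exercise is that "good design" for this criterion reduces to a simple, checkable number, which is exactly what can feel threatening to a student who views design as an ineffable gift.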
This can be profoundly threatening to a student whose self-perception is centred around being artistic and good at design. Such students often view themselves as having artistic talents that are rare, complex gifts. If you describe design to one of these students as something composed of simple principles that everyone can master, then you’re challenging a major source of their self-esteem.
This situation can easily escalate into a serious problem, if the student tries to maintain self-esteem by, for instance, arguing that good design involves more complex issues than the ones identified in the lecture. Some educators will spot the potential for escalation, and defuse the situation at this point with a response that saves face on both sides. Others, however, will treat this as a challenge to their authority, or as wilful ignorance on the student’s part, and will reiterate their initial point more forcibly. The potential for conflict is now horribly high.
So what can be done about such situations?
This type of escalation is a familiar concept in systems theory and in game theory, both of which provide simple, powerful explanations for a wide range of significant problems involving conflicts and misunderstandings, and both of which are nowhere near as widely known as they should be.
In two of my next articles, I’ll explore these concepts and their implications for education and for human systems more generally.
You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.
There’s more about the theory behind this article in my latest book:
Blind Spot, by Gordon Rugg with Joseph D’Agnese
Rugg, G. & Gerrard, S. (2009). Choosing appropriate teaching and training techniques. International Journal of Information and Operations Management Education 3(1) pp. 1-11.
Sources for images:
Fascinating first blog in a series.
I believe St Augustine of Hippo is reputed to have said something like: “there are three types of students: those who know, those who don’t know, and those who think they know but really don’t.”
For me, it’s the students who think they know but really don’t who are the tough ones.
I look forward to the coming posts illuminating this from angles I hadn’t considered (judging from my experience of reading this blog).
The ones who think they know but really don’t are tough for me, too. I’ve found some solutions, but any suggestions are always gratefully received.
Why does your second image have a yellow circle with a blue border connected to a green circle with a black border?
Human error… I’ll correct it.
Thanks for spotting this.
I’ve now corrected the image.