Why are we being examined on this?

By Gordon Rugg

It’s a fair question, if it’s being asked as a question, rather than as a complaint about the cosmic unfairness of having to study a topic that you don’t see the point of. Sometimes, it’s easy to answer. For instance, if someone wants to be a doctor, then checking their knowledge of medicine is a pretty good idea.

Other times, though, the answer takes you into deep waters that you’d really rather not get into, especially if there’s a chance of some student recording your answer and posting it on social media…

Why do some answers take you into deep waters? That’s the topic of this article. It takes us into history, politics, proxies, and the glass bead game.


Continue reading


Truthiness, scienciness, and product success

By Gordon Rugg

This is a Tempest Prognosticator.

“Tempest Prognosticator” by Badobadop – Own work. Licensed under CC BY-SA 4.0 via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Tempest_Prognosticator.jpg#/media/File:Tempest_Prognosticator.jpg

It’s a splendid example of nineteenth century ingenuity, right down to the name. What does it do? It’s intended to let you know if a storm is approaching. The way it does this is as wonderfully nineteenth century as the name. The Prognosticator is operated by twelve leeches, each of which lives in a bottle. When a storm is approaching, the leeches become agitated, and climb out of their bottles. When they climb out, they disturb a piece of whalebone, which activates a bell. The more serious the risk of storm, the more leeches climb out, and the more often the bell rings.

Continue reading

Passive ignorance, active ignorance, and why students don’t learn

By Gordon Rugg

Why don’t students learn?

It looks like a simple question, which ought to have a simple answer. In reality, understanding why students don’t learn takes us into concepts such as passive ignorance, active ignorance, systems theory, belief systems, naïve physics and cognitive biases. In this article, I’ll skim through some key concepts, to set the scene for a series of later articles that go into more depth about those concepts and about their implications both for education and for other fields.

Bucket image copyleft Hyde & Rugg, 2014; other images from Wikimedia (details at the end of this article)

 

Continue reading

Literature reviews

By Gordon Rugg

Almost every academic article begins with a literature review. As is often the case in academia, this is a rich, sophisticated art form, whose complexities are often invisible to novices. As is also often the case in academia, there are usually solid, sensible reasons for those complexities. As you may already have guessed, these reasons are usually not explained to students and other novices, which often leads to massive and long-lasting misunderstandings.

This article looks at the nature and purpose of literature reviews. It also looks at some forms of literature review which are not as widely known as they should be.

It’s quite a long article, so here’s a picture of a couple of cats as a gentle start.

cats on sofa

Continue reading

Trees, nets and teaching

By Gordon Rugg

Much of the debate on education uses diagrams to illustrate points being discussed.

Many of those diagrams are based on informal semantics.

The result is often chaos.

In this article, I’ll use the knowledge pyramid as an example of how informal semantics can produce confusion rather than clarity.


Continue reading

Chunking, schemata and prototypes

By Gordon Rugg and Sue Gerrard

What are chunking, schemata and prototypes, and why should anybody care?

The second question has a short answer. These are three core concepts in how people process and use information, so they’re centrally important to fields as varied as education and customer requirements gathering.

The first question needs a long answer, because although these concepts are all fairly simple in principle, they have a lot of overlap with each other. This has frequently led to them being confused with each other in the popular literature, which has in turn led to widespread conceptual chaos.

This article goes through the key features of these concepts, with particular attention to potential misunderstandings. It takes us through the nature of information processing, and through a range of the usual suspects for spreading needless confusion.

Original images from Wikipedia; details at the end of this article

Continue reading

The quick and dirty approach to meta-analysis

By Gordon Rugg

In an ideal world, everyone would always do everything perfectly. However, it’s not an ideal world.

So what can you do when you’re trying to make sense of a problem where there’s conflicting evidence, and you don’t have time to work through all the relevant information?

One approach is simply to decide what your conclusion is going to be, and then to ignore any evidence that doesn’t fit. This is not terribly moral or advisable.

Another is to do a meta-analysis, to assess the quality of the evidence as a whole. This sounds impressive; it also sounds like hard work, which it is, if you do a full-scale proper meta-analysis. Academic researchers therefore tend to use one of two types of meta-analysis.

  • The first is the quick and dirty type, which normally gives you a pretty good idea of whether the topic is worth spending your time on.
  • The second is the proper type, which is time-consuming, and requires sophisticated knowledge of research methods, including statistics.

This article, as the title subtly implies, is about the quick and dirty approach. It’s a flawed, imperfect approach, but it’s a good starting point in a flawed, imperfect world.

Continue reading