This is the third post in a short series on semi-tacit and tacit knowledge. The first article gave an overview of the topic, structured around a framework of what people do, don’t, can’t or won’t tell you. The second focused on the various types of do (explicit) and don’t (semi-tacit) knowledge. Here, we look at can’t (strictly tacit) and won’t knowledge.
The issues involved are summed up in the diagram below.
The previous articles in this series looked at mental models and ways of making sense of problems. A recurrent theme in those articles was that using the wrong model can lead to disastrous outcomes.
This raises the question of how to choose the right model to make sense of a problem. In this article, I’ll examine the issues involved in answering this question, and then turn to some practical solutions.
Note: This article is a slightly edited version of an article originally posted on our Search Visualiser blog on May 17, 2012. I’ve updated it to address recent claims about how Artificial Intelligence might revolutionise research.
So what is pattern matching, and why should anyone care about it?
First picture: Two individuals who don’t care about pattern matching. Pom’s the mainly white one, and Tiddles is the mainly black one (names have been changed to protect the innocent…).
Pattern matching is important because it’s at the heart of the digital revolution. Google made its fortune largely from the simplest form of pattern matching. Computers can’t manage the more complex forms of pattern matching yet, but humans can handle them easily. A major goal in computer science research is finding a way for computers to handle those more complex forms of pattern matching. A major challenge in information management is figuring out how to split a task between what the computer does and what the human does.
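To make this concrete, here’s a minimal sketch, in Python, of the simplest form of pattern matching: exact matching of one string of characters against another. This is my own illustration rather than anything from the original article, and the example sentences are invented.

```python
# A minimal sketch of the simplest form of pattern matching: exact
# matching, i.e. checking whether one string of characters occurs
# inside another. This is roughly the kind of matching behind a
# basic keyword search. (Illustration only; the example sentences
# are invented.)

def exact_match(pattern: str, text: str) -> bool:
    """Return True if pattern occurs anywhere in text, ignoring case."""
    return pattern.lower() in text.lower()

documents = [
    "Pattern matching is at the heart of the digital revolution.",
    "Searching for strings is a core task in computing.",
]

for doc in documents:
    print(exact_match("pattern matching", doc), "-", doc)

# Output:
# True - Pattern matching is at the heart of the digital revolution.
# False - Searching for strings is a core task in computing.
```

The exact match succeeds or fails on the literal characters; recognising that the second sentence is about essentially the same topic is one of the more complex forms of matching that humans handle easily and computers still struggle with.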
So, there are good reasons for knowing about pattern matching, and for trying to get a better understanding of it.
As for what pattern matching is: The phrase is used to refer to several concepts which look similar enough to cause confusion, but which are actually quite distinct from each other, and which have very different implications.
It’s a fair question, if it’s being asked as a question, rather than as a complaint about the cosmic unfairness of having to study a topic that you don’t see the point of. Sometimes, it’s easy to answer. For instance, if someone wants to be a doctor, then checking their knowledge of medicine is a pretty good idea.
Other times, though, the answer takes you into deep waters that you’d really rather not get into, especially if there’s a chance of some student recording your answer and posting it on social media…
Why do some answers take you into deep waters? That’s the topic of this article. It takes us into history, politics, proxies, and the glass bead game.
It’s a splendid example of nineteenth century ingenuity, right down to the name. What does it do? It’s intended to let you know if a storm is approaching. The way it does this is as wonderfully nineteenth century as the name. The Prognosticator is operated by twelve leeches, each of which lives in a bottle. When storms are approaching, the leeches become agitated, and climb out of their bottles. When they climb out, they disturb a piece of whalebone, which activates a bell. The more serious the risk of storm, the more leeches climb out, and the more bells ring.
It looks like a simple question, which ought to have a simple answer. In reality, understanding why students don’t learn takes us into concepts such as passive ignorance, active ignorance, systems theory, belief systems, naïve physics and cognitive biases. In this article, I’ll skim through some key concepts, to set the scene for a series of later articles that go into more depth about those concepts and about their implications both for education and for other fields.
Almost every academic article begins with a literature review. As is often the case in academia, this is a rich, sophisticated art form, whose complexities are often invisible to novices. As is also often the case in academia, there are usually solid, sensible reasons for those complexities. As you may already have guessed, these reasons are usually not explained to students and other novices, which often leads to massive and long-lasting misunderstandings.
This article looks at the nature and purpose of literature reviews. It also looks at some forms of literature review which are not as widely known as they should be.
It’s quite a long article, so here’s a picture of a couple of cats as a gentle start.
What are chunking, schemata and prototypes, and why should anybody care?
The second question has a short answer. These are three core concepts in how people process and use information, so they’re centrally important to fields as varied as education and customer requirements gathering.
The first question needs a long answer, because although these concepts are all fairly simple in principle, they have a lot of overlap with each other. This has frequently led to them being confused with each other in the popular literature, which has in turn produced widespread conceptual chaos.
This article goes through the key features of these concepts, with particular attention to potential misunderstandings. It takes us through the nature of information processing, and through a range of the usual suspects for spreading needless confusion.
In an ideal world, everyone would always do everything perfectly. However, it’s not an ideal world.
So what can you do when you’re trying to make sense of a problem where there’s conflicting evidence, and you don’t have time to work through all the relevant information?
One approach is simply to decide what your conclusion is going to be, and then to ignore any evidence that doesn’t fit. This is not terribly moral or advisable.
Another is to do a meta-analysis, to assess the quality of the evidence as a whole. This sounds impressive; it also sounds like hard work, which it is, if you do a full-scale proper meta-analysis. In practice, therefore, academic researchers use one of two types of meta-analysis.
The first is the quick and dirty type, which normally gives you a pretty good idea of whether the topic is worth spending your time on.
The second is the proper type, which is time-consuming, and requires sophisticated knowledge of research methods, including statistics.
This article, as the title subtly implies, is about the quick and dirty approach. It’s a flawed, imperfect approach, but it’s a good starting point in a flawed, imperfect world.
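To give a flavour of what the quick and dirty approach can look like in practice, here’s a minimal sketch of one common tactic, simple vote counting: tally how many studies support, contradict, or are inconclusive about a claim, and see which way the balance falls. This is my own illustration, with an invented study list, not a method taken from the article itself.

```python
# A minimal sketch of vote counting, one quick and dirty tactic for
# getting a first impression of a body of evidence. (Illustration only;
# the studies listed below are invented.)

from collections import Counter

# Hypothetical findings noted during a skim of the literature.
studies = {
    "Smith 2003": "supports",
    "Jones 2007": "contradicts",
    "Lee 2010": "supports",
    "Patel 2012": "inconclusive",
    "Garcia 2015": "supports",
}

tally = Counter(studies.values())
print(tally.most_common())
# [('supports', 3), ('contradicts', 1), ('inconclusive', 1)]
```

A tally like this ignores sample sizes, effect sizes and study quality, which is precisely why it is only a quick and dirty starting point; the proper type of meta-analysis weights each study statistically before combining the results.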