This is the third post in a short series on semi-tacit and tacit knowledge. The first article gave an overview of the topic, structured around a framework of what people do, don’t, can’t or won’t tell you. The second focused on the various types of do (explicit) and don’t (semi-tacit) knowledge. Here, we look at can’t (strictly tacit) and won’t knowledge.
The issues involved are summed up in the diagram below.
This is the second in a series of posts about explicit, semi-tacit and tacit knowledge.
It’s structured around a four-way model of whether people do, don’t, can’t or won’t state the knowledge. If they do state it, it is explicit knowledge, and can be accessed via any method. If people don’t, can’t or won’t state the knowledge, then it is some form of semi-tacit or strictly tacit knowledge, which can only be accessed via a limited set of methods such as observation, laddering or think-aloud.
Tacit knowledge is knowledge which, for whatever reason, is not explicitly stated. The concept of tacit knowledge is widely used, and has been applied to several very different types of knowledge, leading to potential confusion.
In this article, we describe various forms of knowledge that may be described as tacit in the broadest sense; we then discuss the underlying mechanisms involved, and the implications for handling knowledge. The approach we use derives from Gordon’s work with Neil Maiden on software requirements (Maiden & Rugg, 1996; reference at the end of this article).
In brief, the core issue can be summed up as whether people do, don’t, can’t or won’t state the knowledge. If they do state it, it is explicit knowledge, and can be accessed via any method. If people don’t, can’t or won’t state the knowledge, then it is some form of semi-tacit or strictly tacit knowledge, which can only be accessed via a limited set of methods such as observation, laddering or think-aloud. Because of the neurophysiological issues involved, interviews, questionnaires and focus groups are usually unable to access semi-tacit and tacit knowledge.
The image below shows the key issues in a nutshell; the rest of this article unpacks the issues and their implications. There are links at the end of the article to other articles on the methods mentioned in the table. The image below is copyleft; you’re welcome to use it for any non-commercial purpose, including lectures, as long as you retain the copyleft statement as part of the image.
Old inventions seldom die; usually, they fade into the background, and then hang around there for a surprisingly long time.
In this article, I’ll look at how this happens with physical inventions and with innovative ideas; at what is going on underneath the regularities; and at what the implications are. A lot of those implications are important, and counter-intuitive.
In a previous article, I looked at the belief structures of the archetypal “crazy uncle” worldview. It’s a worldview with a strong tendency towards whichever option requires the minimum short-term cognitive load; for example, binary yes/no categorisations rather than greyscales or multiple categories.
One theme I didn’t explore in that article, for reasons of space, was premature closure. This article picks up that theme.
Premature closure is closely related to pre-emptive categorisation, which I’ll also discuss in this article. Both these concepts have significant implications, and both involve minimising short-term cognitive load, usually leading to problems further down the road. Both tend to be strongly associated with the authoritarian worldview, for reasons that I’ll unpack later.
So, what is premature closure? In brief, it’s when you make a decision too early, prematurely closing down the process of search, evaluation and decision-making. This takes various forms; knowing these forms improves your chances of stopping at the right point. For clarity, I’ll use examples within the context of common “crazy uncle” arguments.
The crazy uncle is a well-established and much-dreaded part of Western culture. There’s probably a very similar figure in other cultures too, but in this article, I’ll focus on the Western one, and on what is going on in his head.
Why are crazy uncles permanently angry, and keen to inflict their opinions, prejudices and conspiracy theories on other people? Some parts of the answer are already well covered in popular media and in specialist research, but other parts are less well known.
In this article, I’ll give a brief overview of the better-known elements, and then combine them with insights from knowledge modelling, and see what sort of answer emerges.
The previous articles in this series looked at mental models and ways of making sense of problems. A recurrent theme in those articles was that using the wrong model can lead to disastrous outcomes.
This raises the question of how to choose the right model to make sense of a problem. In this article, I’ll look at the issues involved in answering this question, and then look at some practical solutions.
The previous articles in this series looked at how everyday entities such as a cup of coffee or a Lego pack can provide templates for thinking about other subjects, particularly abstract concepts such as justice, and entities that we can’t directly observe with human senses, such as electricity.
The previous articles examined templates for handling entities that stay where they’re put. With Lego blocks or a cup of coffee, once you’ve put them into a configuration, they stay in that configuration unless something else disturbs them. The Lego blocks stay in the shape you assembled them in; the cup of coffee remains a cup of coffee.
However, not all entities behave that way. In this article, I’ll examine systems theory, and its implications for entities that don’t stay where they’re put, but instead behave in ways that are often unexpected and counter-intuitive. I’ll use Meccano as a worked example.
In politics and religion, a common accusation is that someone is being hypocritical or inconsistent. The previous article in this series looked at how this can arise from the irregular adjective approach to other groups; for example, “Our soldiers are brave” versus “Their soldiers are fanatical” when describing otherwise identical actions.
Often, though, inconsistency is an almost inevitable consequence of dealing with complexity. Mainstream political movements, like organised religions, spend a lot of time and effort in identifying and resolving inherent contradictions within their worldview. This process is so demanding because of the sheer number of possible combinations of beliefs within any mature worldview.
In this article, I’ll work through the implications of this simple but extremely significant issue.