By Gordon Rugg
There’s widespread agreement that rates of illiteracy are high, and that something should be done about it.
And at that point, the agreement ends.
In this article, I’ll examine some widespread models of literacy and some of the main proposed solutions.
Reading ability shown as a greyscale, based on statistical figures. Darker shading represents greater problems with reading.
The image above shows what the statistics look like for literacy in the English-speaking world. About 20% of children in school have some form of reading problem, ranging from mild (light grey) to severe (black). The precise figure depends on definitions, which will be discussed below.
I’ve deliberately used grey/black in this figure, to reflect the dual assumptions that illiteracy is a bad thing and that it needs to be eradicated. I’ll argue below that these assumptions are subtly but significantly mis-framed; illiteracy does indeed cause problems, but it does not follow that attempting to eradicate it completely is the best response.
So, what are some of the current models of what illiteracy is, and of what should be done about it?
The “one more push” model
This model implicitly assumes that current methods have succeeded in making 80% of children literate, and that the remaining 20% can be made literate by just using more of the same methods – for instance, by simply scheduling more of the same type of reading lessons.
The image below shows the literacy rate as a red vertical line, with the implicit message that the line can be moved in the desired direction by just doing more of the same. There’s no greyscale in the horizontal bar; the implicit message here is that all children are much of a muchness as regards their potential ability to read.
The “sliding scale” model
This model assumes that not all reading problems are the same, and that some problems are more easily resolved than others. The image below shows this using a gradient from light (easily resolved) to dark (difficult to resolve). I’ve used green rather than grey to make the point that illiteracy can be viewed non-judgmentally; for instance, illiteracy might be due to a medical problem which is nobody’s fault.
In practical terms, this model predicts that there won’t be a straightforward relationship between the resources spent on combatting illiteracy and the results from that spend. Some causes of illiteracy will be fairly easy to tackle; others will be much harder and much more costly.
The “levels of difficulty” model
This model is similar to the sliding scale model, but it assumes the existence of separate levels of difficulty that can be distinguished from each other in some way. For instance, there’s a widespread distinction between functional illiteracy and illiteracy in the strictest sense, where “functional illiteracy” means that the child can read the individual words in a sentence, but can’t work out the overall meaning of the sentence.
In practical terms, this model makes it easier to identify specific targets for improvement.
The “thing called dyslexia” model
This model assumes that there is a condition, called dyslexia, which makes some children qualitatively different from others with regard to reading. The implicit assumed corollary is that this single condition could be treated if only we could find the (single) cure.
In practical terms, this means that children with dyslexia will need to be taught using methods tailored to suit the nature of dyslexia. There won’t be any significant improvement if children with dyslexia are given more lessons using the same methods that worked with other children; there’s actually a chance that this strategy would backfire, by making the children feel inadequate and stupid.
The “many things called dyslexia” model
This model assumes that there are multiple separate causes for reading difficulties, and that these diverse causes have often been lumped together in the literature under the umbrella category of “dyslexia”.
In practical terms, this model predicts that each cause of problems will need to be tackled using different methods. What works for one cause will not necessarily work for others, or may make the situation worse for others. If these different causes and solutions are lumped together as “dyslexia” then the results will be mixed together into an apparently incoherent mess, where some treatments appear to improve some cases of “dyslexia” but not others, and where the search for a single silver bullet goes on and on. If, however, we examine each cause separately, then we have a much better chance of being able to find and apply solutions.
When you look at the numbers, you start to see some interesting regularities.
In England and Wales, the rate of functional illiteracy has stayed pretty much constant at about 17% to 20% since 1948, when consistently applicable records begin. It has stayed the same across massive social changes, and across wide changes in educational theory and practice. That implies that something qualitatively different is happening in functional illiteracy compared with “normal reading”, something that isn’t being tackled by the standard methods used in schools for teaching reading.
There’s a large, sophisticated set of literatures on dyslexia and on the neurophysiology of reading. These show clearly that reading isn’t a simple, single skill that can always be acquired simply through practice. Similarly, they make it clear that there isn’t a single unitary condition called “dyslexia” – instead, there are numerous separate issues that can cause reading difficulties either individually or in combination with each other.
I won’t attempt to re-cover ground that these literatures have already covered well and in depth. Instead, I’ll look at some specific causes of reading problems that particularly interest me, and that arguably should be more widely discussed.
Interactions between impairments
An obvious cause of reading problems is visual impairment. This issue has been examined in some depth; the report below is a good example.
This report clearly differentiates between visual problems causing difficulties with reading (obvious, and well attested), and the claim that dyslexia is caused by subtle visual problems that can be corrected by e.g. coloured filters (where the evidence is weak and looks contradictory, if we view dyslexia as a single unitary causal condition, as discussed later in this article).
A less obvious, but potentially more serious, cause of problems is interaction between separate impairments. It’s possible for a child to have two or more impairments which are individually only minor problems, but which in combination cause major problems. For instance, a minor visual problem and a minor auditory problem could each have little or no impact on a child’s reading ability, but those same problems in combination could have a devastating effect because of interactions between visual input and the phonological loop in reading.
Although medical professionals are generally aware of the interactions problem, the implications for reading are easy to miss, for instance when a child has separate assessments for vision and for hearing, and is categorised as having only minor problems in each.
Phonology and orthography
One obvious but often overlooked source of problems is the match between spoken and written language. In some languages, there’s a good match between each spoken sound (phoneme) and the corresponding written letter (grapheme).
English isn’t one of those languages.
English has over forty phonemes (the precise number depends on the categorisation you use) and only twenty-six letters. This means that in many cases, a single phoneme has to be represented by a sequence of two or more letters (for example, the single phoneme written as “th”). This in turn means that the spelling system imposes an extra cognitive load in reading English, compared to languages which have a closer mapping between phoneme and grapheme.
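The many-to-one relationship can be made concrete with a small sketch. The phoneme symbols and spelling lists below are illustrative examples, not an exhaustive inventory; the point is simply that one English sound routinely has several written forms, each of which the reader must learn.

```python
# Illustrative (not exhaustive) mapping from a few English phonemes to
# some of the letter sequences used to spell them.
spellings = {
    "/f/": ["f", "ff", "ph", "gh"],        # fun, off, phone, laugh
    "/k/": ["c", "k", "ck", "ch", "qu"],   # cat, kit, back, chorus, plaque
    "/ʃ/": ["sh", "ti", "ci", "ch"],       # ship, nation, special, machine
}

# Count how many written forms a reader must learn per sound.
for phoneme, graphemes in spellings.items():
    print(f"{phoneme}: {len(graphemes)} common spellings -> {graphemes}")
```

In a language with a close phoneme-to-letter match, each entry in such a table would contain a single spelling; in English, almost every entry is a list.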
In addition, the shape of the writing system has implications for reading. The Roman alphabet contains numerous letters that are potential pitfalls for anyone who has difficulty in distinguishing left from right (a surprisingly high proportion of the population). In a sans serif font, the four letters b, d, p and q are differentiated only by orientation. This is just asking for trouble. If each letter in the alphabet had a unique shape, and wasn’t a rotated form of any other letter in the alphabet, then this particular source of problems would be removed.
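A minimal sketch of the problem, assuming an idealised sans serif letterform: each of the four letters can be described as a transformation of a single base shape, which is exactly why a reader with left/right confusion struggles to tell them apart.

```python
# Sketch: in a typical sans serif font, each of these letters is (roughly)
# the same shape as 'b' under a simple flip or rotation.
confusable = {
    "b": "identity (the base shape)",
    "d": "mirrored left-right",
    "p": "mirrored top-bottom",
    "q": "rotated 180 degrees",
}

for letter, transform in confusable.items():
    print(f"{letter}: {transform}")
```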
There’s a separate issue about commonalities in dyslexia across languages and orthographies, as in the article below. For brevity, I’ll leave that issue to one side.
The politics of spelling and favoured dialects
Another obvious way of reducing mismatches between spelling and pronunciation is via spelling reform. It’s a nice idea, but it soon runs into deep problems with dialectal differences.
In some dialects of English, for instance, the words “whet” and “wet” sound identical, and are clear candidates for spelling reform. In other dialects of English, however, these two words sound clearly distinct, and require different spellings.
Comprehensive spelling reform would only work in English if it was mapped onto one dialect, which then raises major political questions about marginalisation of children who don’t speak that dialect.
So, what are the limits to literacy?
The idea that we just need to pour more resources into doing more of the same is a non-starter. It may help some children, and it may look good politically, but it’s based on a mistaken assumption. The methods that teach 80% of children to read won’t also work for the remaining 20%. The reason that those 20% haven’t learnt to read is that those methods didn’t work for them, and so different methods are needed.
Searching for a single “cure for dyslexia” is also a non-starter. In political and humanitarian terms, the concept of “dyslexia” has been invaluable; it helped massively in reducing the stigma attached to non-literacy, and in raising awareness that non-literate children are not lazy or stupid. However, there’s a huge difference between viewing dyslexia as an outcome – that the person has difficulty reading, for whatever reason – and viewing dyslexia as a cause – the idea that there is a single poorly-understood medical condition called “dyslexia” that leads to difficulty in reading. The neurophysiological literature makes it clear that there are multiple causes for reading difficulties, each with different origins and with different implications for treatment and/or cure.
The most promising approach is to treat reading difficulties as a number of diverse conditions, each of which needs to be handled separately.
There’s also an underlying assumption in this debate that needs to be questioned. It’s the assumption that we should aim to eradicate illiteracy completely. I’m well aware of the problems that reading difficulties can cause. However, the best solution in some cases may not involve helping the individual to read. To take an extreme example that illustrates the point: Nobody would try to teach a completely blind student to read text. Instead, we would turn to technologies such as text-to-speech software. What happens if we extend this approach to children who for whatever reason are making no progress with “conventional” reading? This would probably be significantly more cost-effective, and would reduce stress and stigma on the child.
I’ll blog again about this at some point. One closely related topic I’m planning to address is the concept of learning language as being “natural” and of reading as “artificial”.
Notes and links
You’re welcome to use the Hyde & Rugg copyleft images above for any non-commercial purpose, including lectures, provided that you retain the Hyde & Rugg copyleft statement in them.
The linked article below looks at literacy and numeracy between 1948 and 2009. I’ve taken the 17% figure from here. There are several strands of evidence suggesting comparable literacy rates from the mid-19th century onwards, but the evidence is partial, so I’ve focused on the modern data.
A comprehensive, deep, overview of reading is Snowling & Hulme’s The Science of Reading: A Handbook (2007).
There’s more about the theory behind this article in my latest book: Blind Spot, by Gordon Rugg with Joseph D’Agnese.