The Rugg and Taylor “Cryptologia” article on the Voynich Manuscript

By Gordon Rugg and Gavin Taylor

Standard disclaimer: This article represents our own opinions, and does not reflect the views of Keele University or of Gavin’s employer, Heath Lane Academy, Leicester.

We’ve recently had an article about our work on the Voynich Manuscript published in Cryptologia; the work was also discussed in New Scientist. The Cryptologia article is behind a paywall, so we’ve summarised the key points here, for anyone who wants more detail.

The background

Our involvement with the Voynich Manuscript started when Gordon needed a test of concept for the Verifier method that he had developed with Jo Hyde, for detecting errors in previous research into hard, unsolved problems.

The Voynich Manuscript is a book written in a unique script, with odd illustrations. It had previously been believed to be an undeciphered text, either in an unidentified language or in an uncracked code. There were serious problems with both of those explanations. If it was an unidentified language, it was an extremely strange one. If it was an uncracked code, it was either astonishingly sophisticated, or based on a very unusual set of principles. The third main possibility, namely that the manuscript contained only meaningless gibberish, had generally been discounted, because the text of the manuscript shows numerous odd statistical regularities, which everyone believed were much too complex to have been hoaxed.

Gordon’s work showed that this belief was mistaken, and that the most distinctive qualitative features of the Voynich Manuscript could be replicated using low-tech hoaxing methods. This resulted in an article in Cryptologia in 2004.

Gordon’s initial work, however, did not address the quantitative statistical regularities of the text in the manuscript.

Our recent article in Cryptologia addresses this issue, and shows how the most distinctive quantitative features of the VMS can be replicated using the same low-tech hoaxing methods as Gordon’s previous work. These features arise as unintended consequences of the technology being used, which produces statistical regularities as unplanned but inevitable side-effects.

Taken together, these two articles show that the key unusual features of the Voynich Manuscript can be explained as the products of a low-tech mechanism for producing meaningless gibberish.
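To give a flavour of how a low-tech mechanism can produce statistical regularities as side-effects, here is a minimal sketch in the spirit of the table-and-grille method described in the 2004 Cryptologia article: a card with holes (a grille) is moved across a table of word fragments, and the exposed fragments are concatenated into words. The syllable inventory, table size, and grille geometry below are invented for illustration; they are not the actual tables used in that work.

```python
import random
from collections import Counter

# Illustrative syllable inventory -- invented for this sketch, loosely
# echoing Voynichese-looking fragments; not taken from the real tables.
PREFIXES = ["qo", "ch", "sh", "o", "d", "y"]
MIDS     = ["ke", "te", "ol", "ed", "ai", "ee"]
SUFFIXES = ["dy", "in", "aiin", "ar", "al", "or"]

rng = random.Random(0)

# The table: each row holds one prefix, one midfix and one suffix cell.
TABLE = [[rng.choice(PREFIXES), rng.choice(MIDS), rng.choice(SUFFIXES)]
         for _ in range(40)]

def grille_pass(table, offset):
    """One pass of a three-hole 'grille' down the table: at each step the
    holes expose a prefix, a midfix and a suffix from nearby rows, which
    are concatenated into a single word."""
    n = len(table)
    return [table[i][0]
            + table[(i + offset) % n][1]
            + table[(i + 2 * offset) % n][2]
            for i in range(n)]

# Moving the grille to a new offset re-uses the same table cells, so the
# same syllables keep recurring in the same word positions: the repetitive
# word structure is a side-effect of the mechanism, not a design goal.
text = []
for offset in range(1, 6):
    text.extend(grille_pass(TABLE, offset))

print(len(text), "words,", len(set(text)), "distinct")
print(Counter(text).most_common(3))
```

The point of the sketch is that nobody chooses the word statistics: the fixed table and the mechanical movement of the grille impose them automatically, which is the sense in which the regularities are "unplanned but inevitable".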

Continue reading

The Knowledge Modelling Book

By Gordon Rugg

Over the last year, we’ve blogged about various aspects of knowledge modelling. That’s allowed us to go into depth about specific topics.

We’re now pulling that information together into a structured format, as an online book. This article contains the core structure of the book, with links to our previous blog articles about the topics within the book. Those articles cover about half of the material that the final version of the book will contain.

We’ve gone for this format, rather than a single downloadable document, because it’s more practical at this point. The knowledge modelling book covers a lot of topics, and even the current partial draft would be a very large document, with a lot of illustrations.

We’ll update this draft fairly frequently, via further blog articles. Some of those articles will be case studies showing how concepts from the book can be applied to real examples. Other articles will be about the broader and deeper context of the book; in particular, the introductory sections and the discussion sections for the main sections. At some point, we’ll put a more reader-friendly version onto the Hyde & Rugg website, which we’re currently updating.

We welcome constructive feedback and suggestions.

Continue reading

The limits to literacy

By Gordon Rugg

There’s widespread agreement that rates of illiteracy are high, and that something should be done about it.

And at that point, the agreement ends.

In this article, I’ll examine some widespread models of literacy and some of the main proposed solutions.

Reading ability shown as a greyscale, based on statistical figures; darker shading represents greater problems with reading.

Continue reading

Parallel processing and “natural” learning: Inside the black box

By Gordon Rugg

There’s a widespread idea that before entering formal education, people learn via “natural” learning.

It’s a warm, cosy concept; “natural” evokes thoughts of wildflowers and meadows and beauty and fluffy kittens. There’s even a certain amount of truth in it; formal education does generally involve something different from non-formal education. However, when you start looking for clear, practical explanations of how “natural” learning actually works, you encounter a sudden silence.

There are plenty of descriptions of what “natural learning” looks like, but there’s very little discussion of how it might work, in terms of plausible cognitive or neurophysiological mechanisms. This absence makes a sceptical reader start to wonder whether there actually is such a thing as “natural learning” and whether this strand of education theory is chasing something that doesn’t exist.

In fact, there is a well-understood mechanism that accounts for the phenomena being lumped together as “natural learning” and “formal learning” (or whatever term is being used in juxtaposition to “natural learning”). However, when you look in detail at this mechanism, it soon becomes apparent that using a two-way distinction between “natural” and “non-natural” is simplistic and misleading. This is one reason that the “natural/non-natural” debate in education theory is still rumbling on, after more than two thousand years of fruitless and inconclusive argument.

In this article, I’ll discuss the mechanisms of parallel processing and serial processing, and I’ll outline some implications for education theory and practice.

The joys of nature and of fluffy kittens – not always quite the same thing…

Original images from Wikimedia

Continue reading

How long is an education good for?

By Gordon Rugg

There has been a lot of debate over the centuries about the purpose of education. The fact that the debate is still active suggests either that the question is unanswerable, or that it needs to be rephrased.

One way of looking at the problem is graphically. If we represent a lifespan as a timeline, then what insights does that give us about the possible purpose, or purposes, of education?

That’s the topic of this article.

Continue reading

One hundred Hyde & Rugg articles, and the Verifier framework

By Gordon Rugg

This is the 100th post on the Hyde & Rugg blog. We’re taking this opportunity to look back at what we’ve covered and look forward to what comes next.

The image below shows some of the main themes and outputs so far, in the “knowledge cycle” format that underlies our Verifier framework for tackling human error. If you’ve come to this blog after reading Blind Spot, you might be pleased to discover that we’ve been covering the contents of Verifier here in more depth than was possible in the book, and that we’re well on the way to a full description.

The knowledge cycle, and topics that we’ve blogged about

Copyleft Hyde & Rugg 2014

Continue reading