The Rugg and Taylor “Cryptologia” article on the Voynich Manuscript

By Gordon Rugg and Gavin Taylor

Standard disclaimer: This article represents our own opinions, and does not reflect the views of Keele University or of Gavin’s employer, Heath Lane Academy, Leicester.

We’ve recently had an article published in Cryptologia about our work on the Voynich Manuscript, which was discussed in New Scientist. The Cryptologia article is behind a paywall, so in this article we’ve summarised the key points, for anyone who wants some more detail.

The background

Our involvement with the Voynich Manuscript started when Gordon needed a test of concept for the Verifier method that he had developed with Jo Hyde, for detecting errors in previous research into hard, unsolved problems.

The Voynich Manuscript is a book in a unique script, with odd illustrations, which had previously been believed to be an undeciphered text, either in an unidentified language or in an uncracked code. There were serious problems with both those explanations for the manuscript. If it was an unidentified language, then it was an extremely strange one. If it was an uncracked code, then it was either astonishingly sophisticated, or was based on a very unusual set of principles. The third main possibility, namely that the manuscript contained only meaningless gibberish, had been generally discounted, because there are numerous odd statistical regularities in the text of the manuscript, which everyone believed were much too complex to have been hoaxed.

Gordon’s work showed that this belief was mistaken, and that the most distinctive qualitative features of the Voynich Manuscript could be replicated using low-tech hoaxing methods. This resulted in an article in Cryptologia in 2004.

Gordon’s initial work, however, did not address the quantitative statistical regularities of the text in the manuscript.

Our recent article in Cryptologia addresses this issue, and shows how the most distinctive quantitative features of the VMS can be replicated using the same low-tech hoaxing methods as Gordon’s previous work. These features arise as unintended consequences of the technology being used, which produces statistical regularities as unplanned but inevitable side-effects.

Taken together, these two articles show that the key unusual features of the Voynich Manuscript can be explained as the products of a low-tech mechanism for producing meaningless gibberish.

[Banner image: examples of marginalia from two mediaeval manuscripts]

The Voynich Manuscript: Background and overview

The Voynich Manuscript is usually described as having been found by Wilfrid Voynich in 1912. It’s a hand-written book of about 240 pages (some pages appear to have been cut out, and others are fold-outs, so different definitions will give different page counts). The document is illustrated on most pages; the illustrations are usually described as a mixture of the prosaic (e.g. a water lily) and the bizarre. Radiocarbon dating gives a date of around 1420 for the vellum on which the manuscript was written.

All of the key points above are open to varying levels of debate.

The circumstances in which Voynich obtained the manuscript are debatable; he probably found it in a monastery in Italy, but there have been suspicions that this was just a cover story that he used to conceal his actual source, for commercial or other reasons.

The missing pages may well have been cut out because they featured striking artwork, but Gordon has argued in another article that they may have been removed as part of a hoax.

Many of the illustrations appear bizarre to most modern readers, but if you compare them with the illustrations in mediaeval manuscripts, particularly the unofficial illustrations in the margins of those manuscripts, then the Voynich Manuscript illustrations are actually not remarkably odd. The banner image above shows two examples of marginalia from other mediaeval manuscripts. Here are some more from the Liber Floridus, of about 1090 to 1110, juxtaposed with similar images from the Voynich Manuscript.

[Image: marginalia from the Liber Floridus alongside similar illustrations from the Voynich Manuscript]

We’re not claiming that the Voynich Manuscript was a hoax based on the Liber Floridus, but we are pointing out that the Voynich Manuscript illustrations are far less unusual than is often claimed.

We don’t have any arguments with the carbon dating for the vellum, but for anyone trying to work out when the manuscript was produced, it’s important to understand that carbon dating gives a probable date with a range of uncertainty. It doesn’t give absolute earliest and latest possible dates (e.g. “between 1400 and 1440”). Instead, the further you move from the most likely date, the lower the chances that the item actually comes from that date; for instance, if the most likely date from carbon dating is 1420, then you can calculate how likely it is that the artefact actually dates from 1440, or 1460, or 1500, or whenever.

There’s also the issue of when the manuscript was actually created, which may have been long after the date when the vellum was produced, as Gordon has argued here.

That’s the brief background of the manuscript itself. There are numerous odd statistical features in the text of the manuscript, some of which Gordon addressed in his 2004 article. In that article, he also addressed the feasibility of producing the manuscript as a hoax. There’s more about that issue in this series of blog articles.

His conclusion was that a hoax was completely feasible, both logistically and financially. However, these articles did not address all of the statistical features of the text in the manuscript. Our recent article addresses those remaining features. The key points we made in the article are as follows.

The quantitative features addressed in our Cryptologia article

Gordon has blogged previously about some of the quantitative features of the Voynich Manuscript in the articles below.

https://hydeandrugg.wordpress.com/2013/06/24/the-voynich-manuscript-non-random-word-sequences-as-a-byproduct-of-hoaxing/

https://hydeandrugg.wordpress.com/2013/06/23/the-voynich-manuscript-emergent-complexity-in-hoaxed-texts/

https://hydeandrugg.wordpress.com/2013/06/22/verifier-voynich-and-accidental-complexity/

Word structure, and statistical properties of words

This part of our Cryptologia article pulls together various evidence about how the word structure and statistical properties of text in the Voynich Manuscript could be hoaxed.

The word structure of [optional prefix] [optional root] [optional suffix] is similar to Latin and other Indo-European languages, apart from the odd feature of the root not always being present in Voynichese text.

In our article, we show how building syllable sets semi-systematically, with each set containing progressively longer variants, leads to a binomial distribution of word lengths, which is what occurs in Voynichese. For instance, the following syllables all occur quite commonly in Voynichese:

Prefix: o, ol, olo

Root: k, ke, kee

Suffix: y, dy, ldy

If you combine these syllables in all the possible permutations, the resulting distribution is the same, on a small scale, as the distribution of word lengths in Voynichese.
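As a rough illustration of that point, here is a short Python sketch (a simplification for this post, not the code from the article) that takes the example syllables above, treats each component as optional, and tallies the word lengths across every combination:

from itertools import product
from collections import Counter

# The example syllables above, with "" standing for an absent (optional) component.
prefixes = ["", "o", "ol", "olo"]
roots = ["", "k", "ke", "kee"]
suffixes = ["", "y", "dy", "ldy"]

# Count how often each word length turns up across all permutations.
lengths = Counter(len(p + r + s) for p, r, s in product(prefixes, roots, suffixes))

for length in sorted(lengths):
    print(length, "#" * lengths[length])

The counts rise to a peak at the middle lengths and fall away symmetrically on either side, which is the binomial-like distribution of word lengths referred to above.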

Line structure, and statistical properties of lines

This part of our article shows how different ways of populating a table with syllables will affect various statistical features of the text produced using the table. A key point to note is that the table and grille method does not produce truly random output. This is a key feature of the method, and addresses the long-recognised point that the text in the Voynich Manuscript is not a random assemblage.
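The tables and grilles used for the actual experiments are described in Gordon’s 2004 article; the toy Python sketch below uses a made-up miniature table and grille purely to illustrate the general idea of sliding a card with cut-out holes across a table of syllables and reading off whatever the holes expose:

# Toy illustration only: this miniature table and grille are invented for the
# sketch, and are far smaller than anything used to test the method properly.

# Columns alternate prefix / root / suffix components; some cells are empty,
# which is how missing components (and blank words) arise.
table = [
    ["o",   "k",   "y",   "qo",  "ke",  "dy"],
    ["ol",  "",    "dy",  "",    "kee", "y"],
    ["olo", "ke",  "ldy", "o",   "",    ""],
    ["",    "kee", "y",   "ol",  "k",   "dy"],
]

# The grille is a card with three holes side by side; placing it on the table
# exposes one prefix cell, one root cell and one suffix cell.
def read_word(row, col):
    """Read the word exposed when the grille's left-hand hole sits at (row, col)."""
    return "".join(table[row][col + i] for i in range(3))

# Move the grille around the table to build a line of words.
placements = [(0, 0), (1, 3), (2, 0), (3, 3), (0, 3), (2, 3)]
line = ".".join(read_word(r, c) for r, c in placements)
print("." + line + ".")

As the article discusses, the method does not produce truly random output: a given placement always exposes the same three cells, and the fixed list of placements in the sketch is just a stand-in for however the grille is actually moved across the table.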

Zipf’s law

A common criticism of Gordon’s previous work was that it did not address the way that text in the Voynich Manuscript follows Zipf’s law (a characteristic curve relating word frequency to frequency rank, which real languages follow).

In this part of our article, we were able to show via Gavin’s work that three different sets of meaningless gibberish produced using the table and grille technique all follow Zipf’s law, and are comparable to a range of real natural language texts in their Zipf’s law curves.

This is arguably the most significant part of the article. Zipf’s law had previously been regarded as strong evidence for the text of the Voynich Manuscript containing meaningful content. Our findings demonstrate that this argument is actually weak.
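For readers who want to try this kind of check on their own samples: the basic test is to count word frequencies, rank them from most to least frequent, and see how closely log frequency against log rank follows a straight line with a slope of about -1. The sketch below (our own simplification, not the analysis code from the article; the file name is a placeholder) estimates that slope by least squares:

import math
from collections import Counter

def zipf_slope(words):
    """Least-squares slope of log(frequency) against log(rank)."""
    freqs = sorted(Counter(words).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(freq) for freq in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Usage: feed in the words of any text, e.g. a table-and-grille output or a
# real-language sample, and compare the slopes and curve shapes.
sample_words = open("generated_gibberish.txt").read().split()
print(zipf_slope(sample_words))

The interesting comparison is then between the slopes and the overall shapes of the curves for the gibberish samples and for real-language texts of comparable length.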

Distribution of syllables within the VMS

In this section, we demonstrate that the distribution of common syllables within the Voynich Manuscript is very different from the distribution of common syllables in real human languages, but very similar to what would be predicted for meaningless gibberish produced using tables and grilles, with several changes of table for logistical reasons.

One of Gordon’s previous blog articles also discusses this feature.

Closing thoughts

In conclusion, we argue that all the most striking features of the Voynich Manuscript can be produced by using tables and grilles to produce meaningless gibberish.

This set of results does not prove that the Voynich Manuscript is a hoax, but it does show that a hoax would be easily feasible. Our estimate, based on Gordon’s use of the method to produce hand-written illustrated pages, is that the entire manuscript could have been produced in ten weeks by one person acting alone, or in about half that time working with an accomplice. Gordon examined the economic feasibility of a hoax in this blog article and concluded that it was well within the range of costs and expected payoffs from documented art hoaxes.

When was it made?

This method of producing gibberish has some implications for estimating the date when the Voynich Manuscript was produced.

The carbon date for the vellum of the manuscript is around 1420, with the usual range of confidence that comes with radiocarbon dates. However, this does not say when the manuscript itself was produced. Rich SantaColoma has shown that obtaining large quantities of old vellum is quite easy today, and would have been easy in earlier centuries. It is therefore perfectly possible that the manuscript was written on already-old vellum, either as an innocent use of available material, or as part of a deliberate attempt to make the manuscript look aged.

The nature of the hoaxing method described here is consistent with a hoax produced significantly after the 1420s, because of the features that the method reproduces, such as some letters, some syllables and some words being more common than others.

These features are consistent with what a relatively sophisticated cryptographer would want to reproduce, which implies a date of the 1470s or later. Cryptography in the 1420s was comparatively crude, so a hoaxer working at the time when the vellum was produced would be unlikely to pay much attention to these features. In the 1470s, however, cryptography went through its first golden age, with significant advances driven by conflict between the Italian city states.

This mismatch between the carbon date and the features being produced might be due to a 1470s hoaxer using old vellum just because it was available. It might also, however, be a significantly later (e.g. 1580s) hoaxer using old vellum and textual features to make the manuscript look like something from the 1470s, and accidentally overshooting with the age of the vellum.

There are some other features of the manuscript which could throw light on whether or not it is likely to be a meaningless hoax. A key question is whether there are significant numbers of erasures and corrections in the text. If the text contains meaningful content, then we would expect there to be numbers of erasures and corrections comparable with other documents of the fifteenth or sixteenth century. If, however, it contains only meaningless gibberish, then we would expect few or no erasures and corrections. As far as we know, the pages that have so far been examined show no erasures or corrections, so the balance of evidence is consistent with a meaningless hoax. In anything relating to the Voynich Manuscript, however, few things stay constant for long…

Notes and links

We’ve used the images from the Liber Floridus under fair use terms, as low-resolution images that are already in the public domain, being used as part of an academic study.

Other resources:

There’s more about the theory behind this article in Gordon’s latest book:

Blind Spot, by Gordon Rugg with Joseph D’Agnese

http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903

You might also find our website useful:

http://www.hydeandrugg.com/

Overviews of the articles on this blog:

https://hydeandrugg.wordpress.com/2015/01/12/the-knowledge-modelling-book/

https://hydeandrugg.wordpress.com/2015/07/24/200-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/09/19/150-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/04/28/one-hundred-hyde-rugg-articles-and-the-verifier-framework/

12 thoughts on “The Rugg and Taylor “Cryptologia” article on the Voynich Manuscript”

  1. Good day!
    I don’t agree with Gordon Rugg.
    The manuscript is not written with letters; it is written with characters that stand for the letters of the alphabet of one of the ancient languages. I have picked up the key, with which I could read the following words in the first section: hemp, wearing hemp; food, food (sheet 20 in the numbering used on the Internet); to clean (gut), knowledge, perhaps the desire, to drink, sweet beverage (nectar), maturation (maturity), to consider, to believe (sheet 107); to drink; six; flourishing; increasing; intense; peas; sweet drink, nectar, etc. These are just the short words of 2-3 signs. Translating words of more than 2-3 characters requires knowledge of this ancient language, because some signs correspond to two letters. Thus, for example, a word consisting of three characters can contain up to six letters. In the end, you need six characters to define a meaningful word of three letters. Of course, without knowledge of this language this is very difficult, even with a dictionary.
    If you are interested, I am ready to send more detailed information, including scans of pages showing the translated words.
    Nicholas.


  3. Dear Professor Rugg, if you have not done this already, doesn’t your table and grille methodology lend itself to automation? It seems to me that it would be feasible and probably not difficult to write a program in which the inputs were the VMS glyphs (on some appropriate definition) and the output was a manuscript of the same length as the VMS, with the same thematic sections. The program would need to replicate (but fuzzily, i.e. with some randomness) all the commonly known “laws” about, for example:
    – the lengths of words
    – the lengths of lines
    – the differences between the thematic sections of the VMS
    – the differences between Voynich A and B
    – and for each glyph, its frequency of occurring (a) at the beginning of a word (b) at the end of a word (c) in the middle of a word (d) at the beginning of a line (e) in the first half of the line (f) before or after or in conjunction with another glyph (cf. “8am”, “89”, “4o” etc), (g) … …
    It would then be instructive to make a statistical comparison of the program’s output with the actual VMS.
    With best regards
    Bob Edwards

    • Good point; one of my students, Laura Aylward, did just this. We found that the output was very sensitive to tweaks in the initial parameters, with apparently small changes in the table creation algorithm producing strikingly different outputs, even using the same initial set of syllables to populate the tables.
      I found that using the table and grille approach manually made sense of some odd features of Voynichese, as a result of human factors such as errors or shortcuts when tired, that would not show up in an automated version of the approach.

    • I don’t think it is; the system for undergraduate dissertations at Keele is different from the one for PhD dissertations.
      Gavin Taylor and I included some examples of the outputs from her work in our Cryptologia paper, which you may be able to access online. If you’re particularly interested in this, I can contact you offline and go into more detail.

      Here are a couple of examples of output from her work, one where the algorithm for table generation was very tightly constrained, and one where the algorithm used the same set of syllables and syllable frequencies but was very loosely constrained, both using the same sets of grilles. We were struck by how different the outputs looked, in particular as regards variability in word lengths and number of “blank” words.

      Output from the text generation software in the high-structure setting

      The words are separated by dots. Where the text was a “blank” word, i.e. only empty cells were selected, this is indicated by two dots. Note the high proportion of blank words, and also the variability of word lengths in this setting.

      .ochey.ochedy.ochey.ochey.ochedy.ochey.ochey.ochedy.ochey.
      .kdy…kdy…kdy…
      dy..she.dy…dy…oldy.
      qo.qo.qokdy.qo.qoshe.qokdy.qoaiin.qo.qokdy.qoal.
      y.dy.y.y.dy.y.shey.dy.oly.y.
      kdy.aiin..kdy..d.kdy..sheal.kdy.
      .she.keedy..aiin.keedy..ol.keedy..
      qo.qokdy.qo.qoshe.qokdy.qo.qo.qokdy.qo.qot.
      ochedy.ochey.ochey.ochedy.ochey.ochey.ochedy.ochey.ochey.ochedy.
      ..kdy…kdy.al.she.kdy..

      Low structure setting:

      qoqochey.kedy.qokdal.qodor.skeal.dy.ltdy.chedy.otdy.ykeeal.
      orshey.qoain.shcthain.olchdy.cheekdy.chdy.ochey.ky.okol.ocheky.
      dolshey.qochedy.qochey.keey.oaiin.qoteeas.ycthey.pcheoly.qoshedy.yy.
      qochey.cthdy.kody.tchdy.qoteaiin.oaiin.qoky.qokd.keal.qokeeal.
      qokdy.sheaiin.oy.chaiin.kor.kdy.qoky.qopdy.qoshey.chedy.
      oshdy.qosheaiin.tedy.olkeeal.olaiin.ky.key.chckheal.qokdy.qokedy.
      okchaiin.qoshedy.qoky.qokeear.oqoshedy.al.okal.qoshol.oddsheaiin.dycheear.
      ky.qochecthan.chdy.yshckhedy.lckhed.kol.aiin.qoky.lkeaiin.ka.
      y.chckhy.kal.cheer.chey.kol.dy.sheealol.oky.alchear.

  4. Dear Professor Rugg
    Many thanks for these clarifications. I will search for your and Gavin Taylor’s article “Hoaxing statistical features of the Voynich Manuscript” in Cryptologia. If it’s on Jstor, I will be able to access it through cantab.net.
    In the meantime, if I may, I’d like to elaborate my ideas for further investigation of the origins of the VMS.
    I’m assuming that the hypothesis to be tested is that the producer or producers of the VMS created the manuscript by means of an algorithm. (I think it’s not necessary to use the term “hoax”, since that involves a hypothesis about the intentions of the producers, which is a separate issue.)
    I think that it’s a feasible objective to replicate that algorithm, not in the sense of recreating the VMS or even a page of the VMS, but in the sense of producing a document of the same length and structure as the VMS, which in some statistical sense (to be defined) is equivalent to the VMS.
    You and Mr Taylor have demonstrated that an algorithm (namely a structured table of glyphs combined with a Cardan grille) can produce a manuscript which has many of the statistical attributes of the VMS. You also demonstrated that a simple algorithm can generate the appearance of complexity in the output.
    (In passing: no doubt you’re familiar with the Mandelbrot set.)
    I take your point that manual operation of the algorithm gives a better intuitive feeling of how the producers, if using an algorithm, would have worked. The reason that I suggested automation of this operation is that in order to replicate such an algorithm, I envisage the development of a computer program which would be run many times (I’m guessing, thousands of times), with variations on the parameters.
    My initial thoughts are that the program would emulate a process along the following lines:
    • The producers set out to create a manuscript of about 240 pages divided into six sections. (I think it’s not necessary to assume a target for the number of glyphs or the “word” count.)
    • The producers intend to develop slightly different algorithms for each of the six sections.
    • The producers develop a glyph set or “alphabet” containing all the glyphs to be used. Possibly there are six subsets with substantial overlap but slight variations, corresponding to the six sections.
    • The producers construct six sets of dice, one for each of the sections. (Since dice have existed for thousands of years, I’m suggesting dice as a simple alternative to a table and grille, in order to avoid an assumption about the date of this process.) (Occam’s Razor)
    • For each section, the corresponding set of dice contains enough individual dice for every glyph (needed for that section) to be inscribed at least once on a face of a die.
    • Each die has six faces. (The program could possibly vary this assumption.)
    • On each face there is inscribed (a) nothing (b) a single glyph (c) a pair of glyphs (d) a triplet or more glyphs, in some proportions determined by the producers.
    • In preparation for creating a line, a producer takes a number of dice from the set. This number is constrained in some way (for example, not more than four dice); the program has to embody and vary the constraints.
    • The producer rolls the dice and records, in rough copy, the glyphs that appear on the top faces.
    • There are several algorithms for dealing with blank faces:
     – Treat a blank face as a space between “words”
     – Ignore blank faces and concatenate the glyphs on non-blank faces
     – If all faces are blank, roll the dice again
     – Etc …
    • Conversely, to create spaces between “words”, the producer can either use a blank face from a roll of the dice, or insert a space after every roll of the dice in which no blank faces have appeared.
    • When sufficient glyphs and spaces have been recorded to complete a line, the producer transcribes the line from the rough copy to a good copy. This good copy is either on an intermediate document for approval by the quality controller, or on the final document; this issue is not critical, but addresses Currier’s “functionality of the line”.
    • After a line has been completed, the producer starts another series of rolls of the dice for the subsequent line.
    This process continues until enough text has been generated to fill the 240 pages.
    The program would have to embody variations on this process to deal with labels and circular text.
    Your point about sensitivity to initial assumptions is, I think, an opportunity rather than a constraint. If in the course of many iterations of the program, the output diverges substantially, in a statistical sense, from the actual VMS, the parameter values that create divergences can be eliminated.
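
    (By way of illustration, the process above could be emulated with a short Python script along the following lines; every parameter and every face inscription in it is an arbitrary placeholder, not a proposal about the actual values.)

    import random

    # Every choice below (faces per die, dice per throw, line length, and what
    # is written on the faces) is an arbitrary placeholder, purely to
    # illustrate the process, not a proposal about the actual values.

    FACES = ["", "o", "qo", "k", "dy", "aiin"]   # one blank face plus five glyph groups
    DICE_PER_THROW = 4
    GLYPHS_PER_LINE = 30

    def throw_dice():
        """Roll the dice and read the top faces from left to right."""
        return [random.choice(FACES) for _ in range(DICE_PER_THROW)]

    def make_line():
        """Repeat throws until enough glyphs have been recorded for one line.
        A blank face is treated as a space between 'words' (the first of the
        blank-face rules listed above)."""
        recorded = []
        while len("".join(recorded)) < GLYPHS_PER_LINE:
            for face in throw_dice():
                recorded.append(face if face else ".")
        return "".join(recorded)

    random.seed(1)
    for _ in range(3):
        print(make_line())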

    • Excellent points. Laura made a very good start on this, within the time constraints of an undergraduate project, and demonstrated that a computer-based algorithm can produce text which gives significant new insights into how the manuscript could have been produced. As you say, the next stage is to explore more systematically the outputs produced using different versions of the algorithm.

      If you’re interested in doing this, I’ll be more than happy to help you.

      One thing I’d strongly advise is getting some hands-on experience of producing text manually using the materials available in the fifteenth and sixteenth centuries, combined with a table and grille and with dice. Blank physical rewriteable dice are readily available. I considered dice, coins and prism-shaped wands (all used in mediaeval gaming) as possible mechanisms for producing the manuscript, but ended up concluding that tables and grilles were much less effort to use once the tables and grilles were set up.

      The manual process makes sense of many features of the manuscript which would otherwise be puzzling. One example is systematic differences in word lengths within a line, which can occur as an unintended and probably un-noticed side effect of filling in a table column by column, and also from the temptation to produce text “in your head” to finish a line if the table doesn’t produce enough words to complete a line. This process also gives useful insights into the difficulties of planning foliation correctly the first time that you produce a bound multi-page document.

  5. Dear Professor Rugg, Many thanks. I will buy some blank rewritable dice.
    I would like to develop a strategy for writing on the faces of the dice. I think it may be productive to include not only single glyphs, but also pairs of glyphs, since there are clearly pairs that occur vastly more often than others. For example, to use the V101 alphabet:
    ae occurs 2606 times
    ai occurs 209 times
    am occurs 3390 times
    an occurs 1551 times
    ap occurs 715 times
    ay occurs 2746 times
    az occurs 481 times
    and no other pair beginning with a occurs more than 125 times.
    As I don’t want to reinvent the wheel, may I ask whether anyone has done a complete count of all occurrences of glyph pairs?

    • There’s been a lot of work on occurrences of glyphs, glyph pairs and syllables; Stolfi’s site is a good place to start.

      From memory, there were about a couple of dozen common prefix syllables, with the frequency dropping off rapidly after that, and similar numbers and distributions for midfix/root syllables, and also for suffix syllables. That’s a tractably small number, and is why I went for syllables rather than digraphs as a base unit; that choice also made it easier to replicate Voynichese word structure.

      With dice, at six faces per die, you’d need four or five dice to handle the most common prefix syllables, and similar numbers for midfix/root syllables and for suffix syllables. That’s feasible, but it gets into complications about how to choose which die to use from each batch you roll, and further complications about how to handle rarer syllables.

      I’ll be very interested in hearing what you find. There may well be advantages to dice that I hadn’t considered.

      • Dear Professor Rugg, many thanks for your reply. I’m not insisting on dice. It’s just easier for me to visualise a process based on dice. Secondly, I suspect that any randomizing process could be constructed from a combination of dice-based processes. As an illustration, for someone like myself who does not know programming, I go to Excel for a simple randomising process. For example:
        =1+INT(6*RAND()) yields a random integer between 1 and 6;
        or more generally, =1+INT(n*RAND()) yields a random integer between 1 and n (thus emulating the throw of an n-sided die);
        and I think it would be straightforward to generalise the function to emulate multiple throws of dice (for example, right-filling the cell across ten columns would emulate a simultaneous throw of ten dice); and down-filling that block for 5,000 rows would emulate repeating that throw 5,000 times; and one could further generalise to a randomised selection of a fixed or flexible number of dice from a larger set of dice (which would emulate grabbing a handful of dice from a bowl).

        With that preamble, my working hypothesis is along the following lines:

        * The Voynich manuscript was constructed by means of an algorithm or several algorithms (maybe up to six, corresponding to the six sections; or maybe two, corresponding to the two Voynich languages).
        * We could think of each algorithm as having three components:
        • Structure (meaning, to pursue the dice analogy, concepts such as the number of faces on each die; the number of bowls of dice; the number of dice drawn from the bowl for each throw)
        • Content (meaning what is written on the faces of the dice)
        • Randomisation (meaning the throwing of the dice).

        * The functional unit of the manuscript is the line (in most cases, a string of characters delimited by the width of the page). We could think of lines broadly to include circular text (a string of characters delimited by the circumference of a circle), or a label (a short string of characters delimited by the size of the corresponding image).

        * As I proposed in an earlier message: the line may have the function of quality control. That is, a producer creates a line in rough copy, passes it for approval or amendment to a quality controller, who passes it to a producer (the same or another) for writing the fair copy.

        * I propose that there are no functional units smaller than the line. Therefore, there are no words in the Voynich manuscript. The strings that resemble natural-language words (i.e. strings of glyphs preceded or followed by spaces) have no function. The spaces are generated by the algorithm (to return to the dice analogy: a blank face on a die generates a space in the string).

        * The structure consists of, or can be emulated by, the following:
        • Defining the number of faces per die (could be six, or could be another number)
        • Creating a set (a “bowl”) of dice which contains at a minimum all the individual glyphs, and very probably, all or most of the pairs of glyphs; and possibly some triplets or longer combinations of glyphs, in the frequencies in which they are intended to appear in the script; I suspect that a few hundred dice are needed
        • Delimiting the number of dice that are to be drawn from the set for any one throw (a number such as four or five sounds reasonable).

        * The randomising process is as follows:
        • selecting the throwing dice from the bowl
        • throwing the selected dice in such a way that they can be read from left to right
        • recording the uppermost face of each die, from left to right, including blanks
        • repeating the throw until the string is long enough to make a line (which can be truncated if need be)
        • after quality control, proceeding to the next line.

        To respond to your point about which die to select: in this hypothesis, the producer selects and records all the dice that he or she rolls (including the dice that yield a blank face). There is no decision making, except when the end of a line is reached. Frequency or rarity of glyphs is handled by the content of the faces.

        The content is, I think, the most labour-intensive part of the emulation. If we are to reconstruct the algorithm, we have to match the frequencies of the glyphs or the glyph strings (including spaces as glyphs). For this purpose, I have made a start by using the “advanced search” function in Adobe Acrobat; it seems faster and more powerful than the equivalent functions in Word or Excel. I have used the V101 alphabet rather than EVA, since in an EVA transliteration (if I have understood it correctly), a search on, for example, “c” would pick up all the “c’s” in “ch”, “cth”, “ckh”, “cfh” etc.

        My first round of searching seemed to yield that in V101, the 19 most frequent glyphs are o, 9, a, 1, e, c, h, 8, . (space), y, k, 4, m, 2, s, C, 7, , (uncertain space), and n, together accounting for about 90% of the whole manuscript. (Maybe other researchers could confirm or correct this.)

        It then made sense to look for glyph pairs containing one or more of these frequent glyphs. This needs 19×19 = 361 searches. I have not yet completed this phase, but so far I have identified high frequencies (2,000 to 4,000 occurrences) of oe, oh, oy, ok, ae, ay, am, 1o, co, c8, c9, ha, 8a.

  6. Dear Professor Rugg, I returned to Microsoft Word for the search function. Applying the “Find and Replace” function to the V101 transliteration, and treating spaces as glyphs, yielded the following results:

    * The most frequent glyphs are . (space), o, 9, a, c, 1, e, 8, h, y, k, 4, m, 2, C, 7, s, , (uncertain space) and n. Their aggregate count is 182,416 which is 94% of the total glyph count.
    * These 19 glyphs seem to have very little interaction with the other ~46 glyphs. For each of the 19 glyphs, the following glyph (reading left to right) is, with 85% to 98% probability, one of the same 19 glyphs.
    * So if we wanted to reconstruct the algorithm, it would make sense to focus on these “big 19” glyphs and worry about the rest later.
    * Working with only these 19 glyphs, there are 43 glyph pairs (2-glyph strings) which occur at least 1,000 times. The five most frequent are:
    9.
    .o
    oh
    .1
    oe
    where in each case . means a space.
    * Again the glyph following any one of these glyph pairs is, with 82% to 98% probability, one of the “big 19”. So the “big 19” like to stick together.
    * There are at least 34 glyph triplets (3-glyph strings) which occur at least 1,000 times. The five most frequent are:
    .4o
    89.
    9.4
    am.
    oe.
    where in each case . means a space.

    I’m thinking that the 3-glyph strings could be the building blocks of the algorithm.
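
    (For anyone who prefers scripting to Word’s search function, a few lines of Python along these lines would reproduce the same kind of counts; the transliteration file name is a placeholder, and spaces are counted as glyphs, as above.)

    from collections import Counter

    # "v101_transliteration.txt" is a placeholder name for whatever V101
    # transliteration file is being used; newlines are stripped and spaces
    # are kept, so spaces are counted as glyphs, as in the counts above.
    with open("v101_transliteration.txt", encoding="utf-8") as f:
        text = f.read().replace("\n", "")

    pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
    triplets = Counter(text[i:i + 3] for i in range(len(text) - 2))

    print("Most frequent glyph pairs:", pairs.most_common(10))
    print("Most frequent glyph triplets:", triplets.most_common(10))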
