The quick and dirty approach to meta-analysis

By Gordon Rugg

In an ideal world, everyone would always do everything perfectly. However, it’s not an ideal world.

So what can you do when you’re trying to make sense of a problem where there’s conflicting evidence, and you don’t have time to work through all the relevant information?

One approach is simply to decide what your conclusion is going to be, and then to ignore any evidence that doesn’t fit. This is not terribly moral or advisable.

Another is to do a meta-analysis, to assess the quality of the evidence as a whole. This sounds impressive; it also sounds like hard work, which it is, if you do a full-scale proper meta-analysis. Most academic researchers therefore use two types of meta-analysis.

  • The first is the quick and dirty type, which normally gives you a pretty good idea of whether the topic is worth spending your time on.
  • The second is the proper type, which is time-consuming, and requires sophisticated knowledge of research methods, including statistics.

This article, as the title subtly implies, is about the quick and dirty approach. It’s a flawed, imperfect approach, but it’s a good starting point in a flawed, imperfect world.

If you’re a normal human being or an academic researcher, then you simply don’t have the time to read everything that might be relevant to the topic that you’re interested in. However, you will want to get a reasonably accurate overview of the quality of evidence in relation to the key issues that interest you.

Most academic researchers use a quick and dirty approach that involves matching the claims being made against the credibility of the venue where those claims appear. What does that mean? Here’s an illustration.

[Figure: the credibility scale for publication venues]

First, credibility.

The more credible the place where a claim appears, the greater the chance that there really is something going on that’s worth looking at. (I’ll look at the underlying principles later in this article; the credibility rating isn’t just about academic snobbery.)

What you can now do is to plot the credibility of the venue against the strength of the effect being claimed (e.g. a claim that treatment X significantly improves problem Y). Here’s what you often see.

[Figure: credibility of venue plotted against strength of claimed effect]

What often happens is that the low-credibility sources claim a strong effect (e.g. that treatment X helps a lot with problem Y), the medium-credibility sources report a weaker effect (e.g. that treatment X helps a fair amount in most cases of problem Y), and the high-credibility sources report only a very weak effect, or no evidence of an effect at all.

Yes, it’s an imperfect rule of thumb, and yes, the high-credibility sources can make mistakes, but as a quick way of assessing what’s going on, it’s a pretty good starting point. From here, you can decide whether you want to go on to a focused study, or to a full-scale proper meta-analysis that gets into the statistical and methodological issues of the articles involved.
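
As a concrete illustration, here’s a minimal Python sketch of the kind of credibility-versus-claimed-effect plot described above. Both rating scales and all of the data points are invented for the example; in practice you’d assign your own ratings to the sources you’ve collected.

```python
# A toy illustration (invented data) of plotting venue credibility
# against the strength of the effect each source claims.
import matplotlib.pyplot as plt

# Hypothetical ratings: venue credibility (1 = low, 5 = high) and
# claimed effect strength (0 = no effect, 10 = strong effect).
credibility    = [1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
claimed_effect = [9, 8, 9, 7, 6, 5, 4, 2, 3, 1, 0]

plt.scatter(credibility, claimed_effect)
plt.xlabel("Credibility of venue (1 = low, 5 = high)")
plt.ylabel("Strength of claimed effect (0 = none, 10 = strong)")
plt.title("Claimed effect strength versus venue credibility")
plt.show()
```

If the points slope downwards from left to right, as in this made-up example, you’re looking at the classic warning pattern described above.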

Where does credibility come from?

In brief: quality control.

Quality control in publications takes two main forms:

  • quality control for form: e.g. spelling, punctuation, grammar, etc.
  • quality control for content: e.g. the methods and the evidence.

All print publications perform quality control for spelling, punctuation, grammar, etc. So do many web sites. This type of quality control isn’t just about pedantry and appearance. Typos in particular can have huge practical implications. The classic example is when someone puts the decimal point in the wrong place, and offers a product for sale at a tenth or a hundredth of the intended price. Some companies have lost very large amounts of money through that mistake. Similarly, a mis-placed comma can sometimes have major legal implications.

The more prestigious venues also do quality control for what’s being claimed in an article. The prestige pecking order in academic publications is based largely on how rigorous that quality control is. The more rigorous the quality control, the greater the prestige.

It’s important to note that this latter type of quality control is about the methods and the evidence being used, not about the conclusions. In academia, your reputation is actually enhanced if you find something solid that contradicts conventional wisdom in your field. You don’t get a Nobel prize for agreeing with everyone else in your field; you get it for new findings and new insights that change your field forever.

This is a major difference between most published academic research and most commercial research, such as think-tank research.

If you’re an academic researcher, you can research pretty much whatever you like, and your funding and career prospects won’t be affected by the conclusions you reach. Instead, they’ll be affected by the quality of the research that you used to reach those conclusions. The people who fund your research don’t usually care what your conclusions are, as long as they’re solidly based.

This is an important point, because it helps academic researchers to stay free of vested interests that might pressure them towards claiming particular conclusions, even when those conclusions are wrong. This happened on a disastrous scale under Stalin, when agricultural researchers were pressured to agree with the conclusions of Stalin’s favoured researcher, Trofim Lysenko. This did not end well.

https://en.wikipedia.org/wiki/Lysenkoism

This doesn’t mean that academic researchers are impartial, or that there is such a thing as pure objectivity. However, it does mean that academic researchers are under no significant pressure to come to any particular conclusion about a research question. The situation is probably different, however, for researchers who are working for a political think-tank, or for a company that produces a controversial product…

Advanced issues

So far, this article has been about a quick and dirty approach to getting an overview of a research topic. If you’re going to do a proper, thorough job of critically evaluating the evidence about a topic, then that’s a whole different beast.

A proper meta-analysis requires a sophisticated understanding of statistics, of research design, of research methods, and of methods of conducting meta-analysis. You need those to perform a proper critical assessment of the quality of the work being reported in each article. Often, articles have fatal flaws that are only identifiable via some very sophisticated understanding of how the work in them should have been carried out.

This level of knowledge typically requires years of postdoctoral-level work to acquire, and would take several books to explain properly. I’m not even going to try to describe the full process here.

This may sound depressing, if you aren’t in a position to acquire the level of experience and skill needed to do full-on meta-analysis.

However, there are some other useful concepts that can help you understand what’s going on within a body of research, and that don’t require a huge investment of time and effort. They can be very useful for identifying potential problems within an article, or within a research field.

In no particular order, these are:

The Courtier’s Reply

The Courtier’s Reply says, in essence, “You need to waste several months or years of your life reading this selection of pseudo-scholarly garbage before you can really understand what we’re actually saying”.

It’s much favoured by cranks, though other groups also use it on occasion. It’s a quick and easy way for them to smear opponents with claims that the opponents haven’t bothered to find out what they’re arguing about.

If you’re wondering how this relates to the academic line of “There’s a literature on that”, then you’re right to wonder. There’s a judgment call about where real scholarship ends and where pseudo-science and charlatanism begin. I don’t plan to stick my head into that particular hornet’s nest just yet, so I’ll move on swiftly to the next topic.

Publication bias toward positive findings

Most publications favour articles that claim to have found something new, as opposed to articles that claim to have looked for something and not found it (for instance, a claim that treatment X improves recovery rates for condition Y, as opposed to a claim that treatment X had no effect on recovery rates for condition Y). This is a problem because it biases the literature towards evidence for a given claim, and away from evidence against it.

This problem is well-recognised in fields relating to health and medicine. Researchers in such fields usually take it explicitly into account when assessing evidence.

This is one reason why it’s not a good idea to assess the evidence for and against a claim simply by counting the published articles on each side; there’s an inbuilt tendency for the articles in favour of a claim to be over-represented.
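
To make the arithmetic of that bias visible, here’s a minimal simulation sketch in Python. Every number in it – the false positive rate and the two publication rates – is an assumption chosen purely for illustration, not a measured value.

```python
# A toy simulation (invented numbers) of publication bias. We assume a
# world where the treatment has NO real effect, and see what the
# published literature ends up looking like.
import random

random.seed(42)

n_studies = 1000            # studies actually carried out
false_positive_rate = 0.05  # chance a study "finds" an effect by luck
p_publish_positive = 0.9    # positive findings usually get published...
p_publish_negative = 0.2    # ...null findings often stay in the file drawer

found = [random.random() < false_positive_rate for _ in range(n_studies)]
published_for = sum(1 for f in found if f and random.random() < p_publish_positive)
published_against = sum(1 for f in found if not f and random.random() < p_publish_negative)

print(f"Studies finding an effect:  {sum(found) / n_studies:.0%} of studies run")
print(f"Published papers in favour: {published_for / (published_for + published_against):.0%} of the published literature")
```

In this toy world no real effect exists, so only around 5% of the studies actually run find one; but because null results so often stay in the file drawer, a much larger share of the published literature ends up arguing in favour of the effect.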

Citation rings

Sometimes a number of researchers will cite each other’s work as often as possible, thereby inflating the visibility of everyone within that ring of mutual publicity, regardless of the quality of their work. This can skew the relative visibility of different approaches within a field – citation rings tend to look as if their approach is more influential than is really the case. Journal editors, and journal reviewers, know about this practice and generally take a dim view of it. However, the practice isn’t likely to end in the foreseeable future, so you need to keep an eye open for it.
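
Detecting citation rings properly is a research topic in its own right, but here’s a toy Python sketch of the crudest possible heuristic: flag pairs of authors who cite each other unusually often. The citation counts and the threshold are invented for the example; serious bibliometric work uses much more sophisticated network analysis over the whole citation graph.

```python
# A toy heuristic (invented data) for spotting candidate citation rings:
# flag author pairs with heavy mutual citation.
from itertools import combinations

# citations[a][b] = number of times author a cites author b
citations = {
    "Author A": {"Author B": 12, "Author C": 11, "Author D": 1},
    "Author B": {"Author A": 10, "Author C": 9},
    "Author C": {"Author A": 13, "Author B": 8},
    "Author D": {"Author A": 2},
}

THRESHOLD = 8  # citations per direction before we raise an eyebrow

for a, b in combinations(citations, 2):
    a_to_b = citations.get(a, {}).get(b, 0)
    b_to_a = citations.get(b, {}).get(a, 0)
    if a_to_b >= THRESHOLD and b_to_a >= THRESHOLD:
        print(f"Possible ring: {a} <-> {b} ({a_to_b} and {b_to_a} mutual citations)")
```

Real rings usually involve more than two people, so a serious version would look for densely connected clusters in the citation network rather than just pairs.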

Systematic Literature Reviews

Done properly, a Systematic Literature Review (SLR) can give an invaluable overview of research on a given topic.

However, doing it right involves an advanced understanding of the usual suspects – statistics, research design, research methods, etc. If you don’t understand these at a sophisticated level, there’s a high risk that you’ll be misled by impressive-looking garbage in a paper produced by a snake oil merchant or by someone who’s better at presentation than research design.

It’s horribly easy for well-intentioned novices to produce something which mimics the surface appearance of an SLR, but which contains fatal mistakes in the judgment calls at the heart of the SLR approach. These mistakes aren’t usually visible in the final write-up. Badly conducted SLRs can look very systematic and objective, but they can also be worse than useless, since the mistakes in their findings can mislead researchers for a long time.

I’ll blog about this in a separate article; it’s another of those topics where explaining the mistakes takes much longer than it takes the perpetrator to commit them.

Discovery curves

I’ll close with a neat, encouraging idea. In some disciplines, notably zoology, discovery curves are used to plot the number of new discoveries made over time. For instance, an ornithologist might plot the number of new species of bird discovered each year since records began. Usually the curve starts off as a steep upward climb, and then levels off into a plateau when there aren’t many new species left to find.

What’s neat about this approach is that if you use it correctly, it can give you a surprisingly accurate idea of how many new discoveries are waiting to be made. (“Correctly” involves things like making allowance for whether someone has just discovered a new island, or whether a new method for defining “species” was introduced during one of the years involved.)
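
Here’s a minimal Python sketch of the core idea: fit a saturating curve to the cumulative number of discoveries, and read the estimated ceiling off the fitted curve. The data are simulated, and the choice of a simple saturating-exponential model is my assumption for illustration; real analyses compare several candidate curve shapes before trusting the estimate.

```python
# A toy discovery-curve fit (simulated data): estimate the total number
# of species from the shape of the cumulative discovery curve.
import numpy as np
from scipy.optimize import curve_fit

def saturating(t, total, rate):
    """Cumulative discoveries climb towards 'total' and then plateau."""
    return total * (1 - np.exp(-rate * t))

years = np.arange(0, 50)           # years since records began
rng = np.random.default_rng(0)
observed = saturating(years, 120, 0.08) + rng.normal(0, 2, years.size)

popt, _ = curve_fit(saturating, years, observed, p0=[100, 0.1])
est_total, est_rate = popt
print(f"Estimated total species:   {est_total:.0f}")
print(f"Still waiting to be found: {est_total - observed[-1]:.0f}")
```

The gap between the fitted ceiling and the discoveries made so far is the estimate of how much is still out there – the kind of figure the examples below play with.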

There are some entertaining examples of this on the excellent Tetrapod Zoology blog – the sea monsters article is a classic.

http://blogs.scientificamerican.com/tetrapod-zoology/

http://scienceblogs.com/tetrapodzoology/2009/03/24/statistics-seals-sea-monsters/

On which inspiring note, I’ll end.

Notes

You’re welcome to use Hyde & Rugg copyleft images for any non-commercial purpose, including lectures, provided that you state that they’re copyleft Hyde & Rugg.

There’s more about the theory behind this article in my latest book:

Blind Spot, by Gordon Rugg with Joseph D’Agnese

http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903

