Misperceptions of failure

By Gordon Rugg

One of the most useful pieces of advice I ever encountered in relation to life planning is that you should aim to be rejected in about 75% of your job applications.

It’s one of those profoundly counter-intuitive concepts that make you re-think a lot of things that you had previously taken for granted, and that give you a much more powerful (and also much more forgiving) set of insights as a result.

Why is it good advice, and what are the implications?

The core concept behind the “75% rejection” advice is that if you’re being offered 100% of the jobs that you’re applying for, then you’re aiming too low.

Once you start thinking this way, a lot of implications rapidly follow. One obvious implication is that you can view rejection rates as calibration, rather than as assessment. That makes a huge difference to your morale, which is a big issue when you’re job hunting and the world appears bleak and hopeless. It also helps you to view yourself as having some control over the process, rather than being powerless. That’s another huge advantage.

In case you’re wondering, the figure of 75% isn’t set in stone, but it’s a useful place to start. It makes the point that you, and everyone else, can expect the majority of sensible applications to end in rejection. It also makes the point that this is a normal part of reaching your goal, so you shouldn’t view it as a reflection on your self-worth.
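To put rough numbers on that (this is purely a back-of-the-envelope sketch, not advice about any particular job market): if you assume a 25% chance of an offer per application, and treat applications as independent, then a modest number of sensible applications still makes at least one offer very likely. Here’s the arithmetic as a short Python sketch; the 25% figure and the independence assumption are both illustrative simplifications.

```python
# Back-of-the-envelope sketch: assumes a 25% chance of an offer per application
# (i.e. a 75% rejection rate) and that applications are independent.
# Both assumptions are illustrative, not claims about any real job market.

def chance_of_at_least_one_offer(p_offer: float, n_applications: int) -> float:
    """Probability of at least one offer across n independent applications."""
    return 1 - (1 - p_offer) ** n_applications

for n in (1, 5, 10, 20):
    chance = chance_of_at_least_one_offer(0.25, n)
    print(f"{n:2d} applications: {chance:.0%} chance of at least one offer")
```

The exact numbers don’t matter; the point is that a 75% rejection rate per application is entirely compatible with a good outcome overall, which is why it works better as calibration than as a verdict on you.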

The same principle applies in a lot of other fields. Even the best researchers in the world, for instance, have most of their funding applications rejected. The best journals routinely reject 90% or more of the papers submitted to them, even though many of those papers come from the best researchers in their fields.

Figures like these will probably start you thinking about just what “failure” actually is. It’s an interesting question. The obvious way of viewing failure is that you start off with a goal, and that if you don’t achieve it, then that is failure. The reality, though, is more complex.

The obvious view, that failure means not achieving your goal, contains the implicit assumption that your goal is achievable in the first place. In many cases, that assumption is wrong. Sometimes there is no way of knowing whether or not your goal is achievable until you get deep into the attempt. In other cases, it’s already known that the goal isn’t achievable, but someone decides to try anyway, often with substantial knock-on effects. In yet other cases, there’s an implicit assumption that success rates ought to be 100%, and that anything else is itself a failure.

Here are some examples.

Could they have known in advance?

The first human landing on the moon arose from Kennedy’s goal of getting a human to the moon, and safely back, by the end of the decade. It worked. It was widely cited in management books thereafter as an exemplary case of clearly stated objectives leading to a successful project.

What’s less often cited is Nixon’s attempt to do the same for cancer research. That had equally clear objectives, but it failed to achieve them. With hindsight, the problem was much more complex and difficult than Nixon’s advisors had believed. However, there’s a good case for arguing that the attempt to achieve those objectives led to much more profound insights into the true nature of the problem, which were very useful to subsequent researchers.

Was it already known to be unachievable?

Another example, about which I’ve blogged previously, is the assumption that it’s possible to teach everyone to read. The evidence suggests pretty strongly that this assumption is simply wrong. The statistics for literacy rates across time, across cultures, and across teaching methods, strongly suggest that there’s a significant percentage (somewhere in the area of 5% to 10%, depending on definitions etc) of people who for whatever reasons will not become functionally literate.

This has far-reaching implications for education theory, for education practice, and for provision of information to the public.

Similarly, in a wide range of fields, there is an implicit or explicit goal of completely eradicating some risk. If you look into the literatures on risk and on disasters, you soon realise that this goal is based on a profound misunderstanding of how things actually go wrong. What’s much more realistic and productive is to frame the issue differently; for instance, in terms of getting the number of cases below some realistic figure.

Should we assume that failure is always bad?

The problems above would be bad enough in isolation. What often happens, though, is that they become political footballs in arguments about policy and legislation, leading to an endless cycle of churn. What does that mean? A common pattern is as follows (there’s a rough simulation of it after the list).

  • There is a problem
  • Method A is introduced, and fixes 99% of cases
  • One of the remaining 1% of failures is given a high media profile
  • Method B is introduced to prevent similar failures from happening again
  • Method B fixes 99% of cases
  • One of the remaining 1% of failures is given a high media profile
  • Method C is introduced to prevent similar failures from happening again
  • Rinse, lather and repeat
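Here’s the rough simulation mentioned above (in Python; every number in it is invented purely for illustration). Each successive method handles the same caseload with the same residual failure rate, so the churn changes the procedures without changing the overall picture.

```python
import random

# Toy simulation of the churn cycle in the list above. All numbers are invented:
# each method handles 10,000 cases a year and fails on 1% of them; any failure
# has a small chance of becoming a high-profile case, which triggers a switch
# to a new method with exactly the same failure rate.

random.seed(42)

CASES_PER_YEAR = 10_000
FAILURE_RATE = 0.01          # every method still fails on ~1% of cases
MEDIA_PROFILE_RATE = 0.001   # chance that any given failure becomes a scandal
YEARS = 30

methods_introduced = 1       # start with Method A
total_failures = 0

for year in range(YEARS):
    failures = sum(random.random() < FAILURE_RATE for _ in range(CASES_PER_YEAR))
    scandals = sum(random.random() < MEDIA_PROFILE_RATE for _ in range(failures))
    total_failures += failures
    if scandals:                  # a high-profile case forces "doing something"...
        methods_introduced += 1   # ...so a new method is introduced, with the
                                  # same residual failure rate as the old one.

print(f"Methods introduced over {YEARS} years: {methods_introduced}")
print(f"Average failures per year: {total_failures / YEARS:.0f} out of {CASES_PER_YEAR} cases,")
print("regardless of how many times the method changed.")
```

The point isn’t the specific numbers; it’s that when every method leaves a similar residue of failures, changing methods in response to individual high-profile cases generates a great deal of procedural change without shifting the long-run failure count.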

The result is constant change in procedures and laws, usually at high financial cost, and usually bringing huge stress to the people affected, without actually changing the overall picture. Why does this happen?

A major reason is that no manager or politician in their senses will say, “Okay, so there was one disastrous case, but we’re not going to change anything”. If they did that, they’d be accused of not caring, and their career prospects would nosedive. There’s a huge incentive for them to be seen to be actively doing something about the problem, even if they know, and everyone who understands the topic knows, that the new system will simply change the problem rather than removing it. (This isn’t an argument for never changing systems; on the contrary, most systems do have room for improvement. What I’m saying here is that knee-jerk political reactions to emotive cases aren’t the best way of assessing and fixing a problem.)

Another issue is people’s perception of justice. There’s a strong temptation to want every injustice to be set right, whether the case involves perceived misuse of the benefits system or a child’s death. That’s understandable, but it shouldn’t be the sole basis for laws and for systems design in areas such as healthcare or child protection. It’s extremely unlikely that anyone or any method can prevent all bad things from happening all of the time; trying to achieve that impossible goal may feel good to the person making the effort, but it usually makes things worse in practice for the people on the ground, who have to cope with the constant changes while knowing that the changes won’t bring any real overall improvement.

A further issue is the assumption that not achieving a goal is always a bad thing. This is a familiar issue in the academic world in relation to PhDs, where there’s been increasing pressure for years from the funding bodies to aim for a 100% pass rate. Again, this is understandable, but it makes no allowance for cases where the student discovers that a PhD is not what they really want to do, so they stop early rather than slogging on for years at something that isn’t for them. Often, those students report afterwards that their time on the PhD was a hugely positive experience, even though they’re still sure that abandoning it was the correct decision.

The 100% pass rate assumption also makes no allowance for “risky” PhDs, where you take on a student who’s extreme in some way. Sometimes, those are the best students, and their work changes the world precisely because it’s gained huge new insights from taking a very unusual approach. Sometimes, conversely, those are the worst students, and you go through some very character-forming experiences before parting company with them. Most traditional academics take the view that every department should have one or two risky PhDs at any given point, because those are the students most likely to push back the frontiers.

Closing thoughts

So, “failure” isn’t always a clearly defined bad thing. Also, the search for a 100% success rate is often a wild goose chase that comes at significant cost to the people that it’s intended to help. Usually, setting realistic goals is better for everyone concerned, rather than being a sell-out to the forces of darkness.

This has been a quick skim over a deep, complex topic. I’ve blogged before about related issues; there are some links below to relevant articles and concepts. I hope you’ve found this useful.

The disaster literature: A good place to start is Perrow’s concept of the “normal accident”. This is about how many accidents are pretty much inevitable as a result of how systems work. Many accidents happen when two or more situations co-occur; each situation may be harmless or even beneficial on its own, but the combination is disastrous.

http://www.amazon.com/Normal-Accidents-Living-High-Risk-Technologies/dp/0691004129
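As a hedged back-of-the-envelope illustration of why “pretty much inevitable” is a fair description (the numbers here are invented): suppose two independent conditions are each present on 1% of operating days, and an accident needs both at once. Any single day is very safe, but over many years and many installations, the combination becomes close to certain somewhere.

```python
# Back-of-the-envelope sketch of the "normal accident" idea (numbers invented).
# Two independent conditions, each harmless on its own, are each present on
# 1% of operating days; an accident needs both on the same day.

p_a = 0.01
p_b = 0.01
p_both = p_a * p_b                        # about 0.01% on any given day

days_per_installation = 250 * 20          # roughly 20 years of operating days
installations = 100                       # a modest fleet of similar systems
exposures = days_per_installation * installations

p_somewhere = 1 - (1 - p_both) ** exposures

print(f"Chance of the combination on any one day: {p_both:.2%}")
print(f"Chance of it happening at least once across the fleet: {p_somewhere:.0%}")
```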

Sub-system optimisation: There’s a widespread, but wrong, belief that if you improve all the parts of a system, then the system as a whole will be improved. In reality, you can often make the system significantly worse by improving the individual parts without taking account of the system as a whole.

https://hydeandrugg.wordpress.com/2013/05/07/subsystem-optimisation-and-system-optimisation/
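A well-known concrete illustration of this (not from the article above, just a standard textbook example) is Braess’s paradox: adding a fast new road to a network, which looks like a pure improvement to one part of it, can make every journey slower once drivers individually re-optimise their routes. Here’s a minimal sketch using the usual textbook numbers.

```python
# Braess's paradox with the standard textbook numbers, as an illustration of
# sub-system vs. system optimisation: a local "improvement" worsens the whole.

DRIVERS = 4000
FIXED_LEG = 45                     # minutes on a wide, uncongested road

def congestible_leg(traffic: int) -> float:
    """Minutes on a narrow road carrying `traffic` drivers: traffic / 100."""
    return traffic / 100

# Before the extra link: two routes, each one congestible leg plus one fixed leg.
# Selfish drivers split evenly because the two routes then cost the same.
split = DRIVERS // 2
time_before = congestible_leg(split) + FIXED_LEG          # 20 + 45 = 65 minutes

# After adding a free shortcut joining the two congestible legs, every selfish
# driver prefers congestible + shortcut + congestible, so all 4000 drivers end
# up on both congestible legs at once, and nobody gains by switching back.
time_after = congestible_leg(DRIVERS) + 0 + congestible_leg(DRIVERS)   # 40 + 0 + 40 = 80

print(f"Average journey before the extra road: {time_before:.0f} minutes")
print(f"Average journey after the extra road:  {time_after:.0f} minutes")
```

The same general pattern shows up whenever the parts of a system interact: optimising each part in isolation is not the same thing as optimising the system.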

Systems theory and emergent properties: You can’t reliably extrapolate from a simple system to a more complex one. The reason is that complex systems behave in ways that simpler systems don’t. So, for example, a national economy behaves in different ways from a commercial company’s budget, and a commercial company’s budget behaves differently from a household budget. This is one reason that you need to be very wary when someone tries to improve a complex system in one area (e.g. education policy) based on their previous experience in a simpler area (e.g. teaching at a school).

https://hydeandrugg.wordpress.com/2014/08/15/systems-theory/

Notes and links:

There’s more about the underlying theory in my latest book, Blind Spot, by Gordon Rugg with Joseph D’Agnese

http://www.amazon.co.uk/Blind-Spot-Gordon-Rugg/dp/0062097903

Overviews of the articles on this blog:

https://hydeandrugg.wordpress.com/2014/09/19/150-posts-and-counting/

https://hydeandrugg.wordpress.com/2014/04/28/one-hundred-hyde-rugg-articles-and-the-verifier-framework/
