Planning 101: Techniques and Research
<Cross-posted from my blog>
[Epistemic status: Relatively strong. There are numerous studies showing that predictions often become miscalibrated. Overconfidence in itself appears fairly robust, appearing in different situations. The actual mechanism behind the planning fallacy is less certain, though there is evidence for the inside/outside view model. The debiasing techniques are supported, but more data on their effectiveness could be good.]
Humans are often quite overconfident, and perhaps for good reason. Back on the savanna and even some places today, bluffing can be an effective strategy for winning at life. Overconfidence can scare down enemies and avoid direct conflict.
When it comes to making plans, however, overconfidence can really screw us over. You can convince everyone (including yourself) that you’ll finish that report in three days, but it might still really take you a week. Overconfidence can’t intimidate advancing deadlines.
I’m talking, of course, about the planning fallacy, our tendency to make unrealistic predictions and plans that just don’t work out.
Students are a prime example of victims of the planning fallacy:
First, students were asked to predict when they were 99% sure they’d finish a project. When the researchers followed up with them later, though, only about 45%, less than half of the students, had actually finished by their own predicted times [Buehler, Griffin, Ross, 1995].
Even more striking, students working on their psychology honors theses were asked to predict when they’d finish, “assuming everything went as poorly as it possibly could.” Yet only about 30% of students finished by their own worst-case estimate [Buehler, Griffin, Ross, 1995].
Similar overconfidence was also found in Japanese and Canadian cultures, giving evidence that this is a human (and not US-culture-based) phenomenon. Students continued to make optimistic predictions, even when they knew the task had taken them longer last time [Buehler and Griffin, 2003, Buehler et al., 2003].
As a student myself, though, I don’t mean to just pick on my own kind.
The planning fallacy affects projects across all sectors.
An overview of public transportation projects found that they ran, on average, 20–45% over their estimated costs. In fact, research has shown that these poor predictions haven’t improved at all in the past 30 years [Flyvbjerg 2006].
And there’s no shortage of anecdotes, from the Scottish Parliament Building, which cost 10 times more than expected, to the Denver International Airport, which took over a year longer and cost several billion more than planned.
When it comes to planning, we suffer from a major disparity between our expectations and reality. This article outlines the research behind why we screw up our predictions and gives three suggested techniques to suck less at planning.
The Mechanism:
So what’s going on in our heads when we make these predictions for planning?
On one level, we just don’t expect things to go wrong. Studies have found that we’re biased towards not looking at pessimistic scenarios [Newby-Clark et al., 2000]. We often just assume the best-case scenario when making plans.
Part of the reason may also be due to a memory bias. It seems that we might underestimate how long things take us, even in our memory [Roy, Christenfeld, and McKenzie 2005].
But by far the dominant theory in the field is the idea of an inside view and an outside view [Kahneman and Lovallo 1993]. The inside view is the information you have about your specific project (inside your head). The outside view is what someone else looking at your project (outside of the situation) might say.
We seem to use inside view thinking when we make plans, and this leads to our optimistic predictions. Instead of thinking about all the things that might go wrong, we’re focused on how we can help our project go right.
Still, it’s the outside view that can give us better predictions. And it turns out we don’t even need to do any heavy statistical lifting to get them. Just asking other people (from the outside) to predict your performance, or even just walking through your task from a third-person point of view, can improve your predictions [Buehler et al., 2010].
Basically, the difference in our predictions seems to depend on whether we’re looking at the problem in our heads (a first-person view) or outside our heads (a third-person view). Whether we’re the “actor” or the “observer” in our minds seems to be a key factor in our planning [Pronin and Ross 2006].
Debiasing Techniques:
I’ll be covering three ways to improve predictions: Murphyjitsu, Reference Class Forecasting (RCF), and Back-planning. In actuality, they’re all pretty much the same thing; all three techniques focus, on some level, on trying to get more of an outside view. So feel free to choose the one you think works best for you (or do all three).
For each technique, I’ll give an overview and cover the steps first and then end with the research that supports it. They might seem deceptively obvious, but do try to keep in mind that obvious advice can still be helpful!
(Remembering to breathe, for example, is obvious, but you should still do it anyway. If you don’t want to suffocate.)
Murphyjitsu:
“Avoid Obvious Failures”
The name Murphyjitsu comes from the infamous Murphy’s Law: “Anything that can go wrong, will go wrong.” The technique itself is from the Center for Applied Rationality (CFAR), and is designed for “bulletproofing your strategies and plans”.
Here are the basic steps:
1. Figure out your goal. This is the thing you want to make plans to do.
2. Write down which specific things you need to get done to make the thing happen. (Make a list.)
3. Now imagine it’s one week (or month) later, and yet you somehow didn’t manage to get started on your goal. (The visualization part here is important.) Are you surprised?
4. Why? (What went wrong that got in your way?)
5. Now imagine you take steps to remove the obstacle from Step 4.
6. Return to Step 3. Are you still surprised that you’d fail? If so, your plan is probably good enough. (Don’t fool yourself!)
7. If failure still seems likely, go through Steps 3–6 a few more times until you “problem proof” your plan.
Murphyjitsu is based on a strategy called a “premortem”, or “prospective hindsight”, which basically means imagining the project has already failed and “looking backwards” to see what went wrong [Klein 2007].
It turns out that putting ourselves in the future and looking back can help identify more risks, or see where things can go wrong. Prospective hindsight has been shown to increase our predictive power so we can make adjustments to our plans — before they fail [Mitchell et al., 1989, Veinott et al., 2010].
This seems to work well, even if we’re only using our intuitions. While that might seem a little weird at first (“aren’t our intuitions pretty arbitrary?”), research has shown that our intuitions can be a good source of information in situations where experience is helpful [Klein 1999; Kahneman 2011]*.
While a premortem is usually done at the organizational level, Murphyjitsu works for individuals. Either way, it’s a useful way to “failure-proof” your plans before you start them, and it taps into the same internal mechanisms.
Here’s what Murphyjitsu looks like in action:
“First, let’s say I decide to exercise every day. That’ll be my goal (Step 1). But I should also be more specific than that, so it’s easier to tell what “exercising” means. So I decide that I want to go running on odd days for 30 minutes and do strength training on even days for 20 minutes. And I want to do them in the evenings (Step 2).
Now, let’s imagine that it’s now one week later, and I didn’t go exercising at all! What went wrong? (Step 3) The first thing that comes to mind is that I forgot to remind myself, and it just slipped my mind (Step 4). Well, what if I set some phone/email reminders? Is that good enough? (Step 5)
Once again, let’s imagine it’s one week later and I made a reminder. But let’s say I still didn’t go exercising. How surprising is this? (Back to Step 3) Hmm, I can see myself getting sore and/or putting other priorities before it… (Step 4). So maybe I’ll also set aside the same time every day, so I can’t easily weasel out (Step 5).
How do I feel now? (Back to Step 3) Well, if once again I imagine it’s one week later and I once again failed, I’d be pretty surprised. My plan has two levels of fail-safes and I do want to exercise anyway. Looks like it’s good! (Done)”
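If it helps to see the loop structure spelled out, here’s the Murphyjitsu cycle as a minimal Python sketch. The prompts and the yes/no “surprise” check are my own framing (not CFAR’s wording), so treat this as a toy walkthrough rather than the technique itself.

```python
# Toy interactive walkthrough of the Murphyjitsu loop (Steps 1-7 above).
# The prompt wording and the y/n "surprise" check are my own framing.

def murphyjitsu():
    goal = input("Step 1 - What's your goal? ")
    plan = input("Step 2 - What specific things need to get done? ")
    patches = []
    while True:
        print(f"\nStep 3 - Imagine it's one week later and you never started on: {goal}")
        surprised = input("Would that failure genuinely surprise you? (y/n) ").strip().lower()
        if surprised == "y":
            break  # failure would be surprising, so the plan is probably good enough
        obstacle = input("Step 4 - What went wrong that got in your way? ")
        fix = input("Step 5 - What will you do to remove that obstacle? ")
        patches.append((obstacle, fix))
        # Steps 6-7: loop back to Step 3 and re-check until failure would surprise you
    print(f"\nPlan: {plan}")
    for obstacle, fix in patches:
        print(f"  If '{obstacle}' shows up -> {fix}")

if __name__ == "__main__":
    murphyjitsu()
```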
Reference Class Forecasting:
“Get Accurate Estimates”
Reference class forecasting (RCF) is all about using the outside view. Our inside views tend to be very optimistic: we see all the ways that things can go right, but none of the ways things can go wrong. By looking at the history of other people who have tried the same or similar things as us, we can get a better idea of how long things will really take.
Here are the basic steps:
1. Figure out what you want to do.
2. Check your records for how long it took you last time.
3. That’s your new prediction.
4. If you don’t have past information, look up about how long it takes, on average, to do your thing. (This usually looks like Googling “average time to do X”.)**
5. That’s your new prediction!
Technically, the actual process for reference class forecasting works a little differently. It involves a statistical distribution and some additional calculations, but for most everyday purposes, the above algorithm should work well enough.
In both cases, we’re trying to take an outside view, which we know improves our estimates [Buehler et al., 1994].
When you Google the average time or look at your own data, you’re forming a “reference class”, a group of related actions that can give you info about how long similar projects tend to take. Hence, the name “reference class forecasting”.
Basically, RCF works by looking only at results. This means that we can avoid any potential biases that might have cropped up if we were to think it through. We’re shortcutting right to the data. The rest of it is basic statistics; most people are close to average. So if we have an idea of what the average looks like, we can be sure we’ll be pretty close to average as well [Flyvbjerg 2006; Flyvbjerg 2008].
The main difference between our algorithm above and the standard one is that ours focuses on your own experience, so the estimate you get tends to be more accurate for you than an average taken from an entire population.
For example, if it usually takes me about 3 hours to finish homework (I use Toggl to track my time), then I’ll predict that it will take me 3 hours today, too.
It’s obvious that RCF is incredibly simple. It literally just tells you that how long something will take you this time will be very close to how long it took you last time. But that doesn’t mean it’s ineffective! Often, the past is a good benchmark of future performance, and it’s far better than any naive prediction your brain might spit out.
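Since the personal version of the “algorithm” is just “look at your own data”, it fits in a few lines of Python. Here’s a minimal sketch, assuming you log your times somewhere (the numbers below are made up):

```python
# Minimal sketch of the personal RCF shortcut above. The durations are hypothetical;
# in practice they would come from a time log (e.g. a Toggl export).
from statistics import median

past_homework_hours = [2.5, 3.0, 3.5, 4.0, 2.75]  # my "reference class"

def rcf_predict(past_durations):
    """Predict this time's duration from how long the same task took before."""
    return median(past_durations)  # the median shrugs off the occasional outlier

print(f"Predicted time: {rcf_predict(past_homework_hours):.1f} hours")
# Prints: Predicted time: 3.0 hours
```

(Using the median rather than the mean is just my own choice here; with a handful of data points, either is a rough stand-in for the fuller distribution the formal method uses.)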
RCF + Murphyjitsu Example:
For me, I’ve found that using a mixture of Reference Class Forecasting and Murphyjitsu to be helpful for reducing overconfidence in my plans.
When starting projects, I will often ask myself, “What were the reasons that I failed last time?” I then make a list of the first three or four “failure-modes” that I can recall. I now make plans to preemptively avoid those past errors.
(This can also be helpful in reverse — asking yourself, “How did I solve a similar difficult problem last time?” when facing a hard problem.)
Here’s an example:
“Say I’m writing a long post (like this one) and I want to know what might go wrong. I’ve done several of these sorts of primers before, so I have a “reference class” of data to draw from. So what were the major reasons I fell behind on those posts?
<Cue thinking>
Hmm, it looks like I would either forget about the project, get distracted, or lose motivation. Sometimes I’d want to do something else instead, or I wouldn’t be very focused.
Okay, great. Now what are some ways that I might be able to “patch” those problems?
Well, I can definitely start by making a priority list of my action items. So I know which things I want to finish first. I can also do short 5-minute planning sessions to make sure I’m actually writing. And I can do some more introspection to try and see what’s up with my motivation.”
Back-planning:
“Calibrate Your Intuitions with Reality”
Back-planning involves, as you might expect, planning from the end. Instead of thinking about where we start and how to move forward, we imagine we’re already at our goal and go backwards.
Here are the steps:
1. Figure out the task you want to get done.
2. Imagine you’re at the end of your task.
3. Now move backwards, step-by-step. What is the step right before you finish?
4. Repeat Step 3 until you get to where you are now.
5. Write down how long you think the task will now take you.
6. You now have a detailed plan as well as a better prediction!
The experimental evidence on back-planning basically suggests that people who plan backwards predict longer times to start and finish projects.
There are a few interesting hypotheses about why back-planning seems to improve predictions. The general gist of these theories is that back-planning is a weird, counterintuitive way to think about things, which means it disrupts a lot of the mental processes that can lead to overconfidence [Wiese et al., 2016].
This means that back-planning can make it harder to fall into the groove of the easy “best-case” planning we default to. Instead, we need to actually look at where things might go wrong. Which is, of course, what we want.
In my own experience, I’ve found that going through a quick back-planning session can help my intuitions “warm up” to my prediction more. As in, I’ll get an estimate from RCF, but it still feels “off”. Walking through the plan via back-planning can help all the parts of me understand that it really will probably take longer.
Here’s the back-planning example:
“Right now, I want to host a talk at my school. I know that’s the end goal (Step 1). So the end goal is me actually finishing the talk and taking questions (Step 2). What happens right before that? (Step 3). Well, people would need to actually be in the room. And I would have needed a room.
Is that all? (Step 3). Also, for people to show up, I would have needed publicity. Probably also something on social media. I’d need to publicize at least a week in advance, or else it won’t be common knowledge.
And what about the actual talk? I would have needed slides, maybe memorize my talk. Also, I’d need to figure out what my talk is actually going to be on.
Huh, thinking it through like this, I’d need something like 3 weeks to get it done. One week for the actual slides, one week for publicity (at least), and one week for everything else that might go wrong.
That feels more ‘right’ than my initial estimate of ‘I can do this by next week.’”
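For completeness, here’s the same back-planning idea as a toy Python sketch. The task, milestones, and day estimates are all hypothetical; the point is just the mechanics of listing steps from the goal backwards, then reversing them and adding up the time.

```python
# Toy back-planning sketch: milestones listed from the goal backwards (Steps 2-4),
# then reversed into a forward schedule with a total estimate (Step 5).
# The task and all numbers are hypothetical.

backwards_steps = [
    ("Submit the term paper", 0.5),                        # the end state
    ("Proofread and format", 1),                           # what happens right before that?
    ("Write the full draft", 7),
    ("Outline and collect sources", 4),
    ("Pick a topic and confirm it with the professor", 2),
]

forward_plan = list(reversed(backwards_steps))
total_days = sum(days for _, days in forward_plan)

for milestone, days in forward_plan:
    print(f"{milestone}: ~{days} day(s)")
print(f"New prediction: ~{total_days} days")               # usually longer than the naive guess
```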
Experimental Ideas:
Murphyjitsu, Reference Class Forecasting, and Back-planning are the three debiasing techniques that I’m fairly confident work well. This section is far more anecdotal. They’re ideas that I think are useful and interesting, but I don’t have much formal backing for them.
Decouple Predictions From Wishes:
In my own experience, I often find it hard to separate when I want to finish a task versus when I actually think I will finish a task. This is a simple distinction to keep in mind when making predictions, and I think it can help decrease optimism. The most important number, after all, is when I actually think I will finish—it’s what’ll most likely actually happen.
There’s some evidence suggesting that “wishful thinking” could actually be responsible for some poor estimates, but it’s far from definitive [Buehler et al., 1997; Krizan and Windschitl, 2009].
Incentivize Correct Predictions:
Lately, I’ve been using a 4-column chart for my work. I write down the task in Column 1 and how long I think it will take me in Column 2. Then I go and do the task. After I’m done, I write down how long it actually took me in Column 3. Column 4 is the absolute value of Column 2 minus Column 3, or my “calibration score”.
The idea is to minimize my score every day. It’s simple and it’s helped me get a better sense for how long things really take.
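If a paper chart isn’t your thing, the same bookkeeping is trivial to script. Here’s a minimal sketch (the task names and times are made up):

```python
# Minimal version of the 4-column calibration chart; the entries are hypothetical.
tasks = [
    # (task,                predicted_minutes, actual_minutes)
    ("Write problem set",   60, 95),
    ("Clear email backlog", 20, 30),
    ("Outline blog post",   45, 40),
]

daily_score = 0
for task, predicted, actual in tasks:
    error = abs(predicted - actual)  # Column 4: the per-task "calibration score"
    daily_score += error
    print(f"{task}: predicted {predicted} min, took {actual} min, off by {error} min")

print(f"Calibration score for the day: {daily_score} min")  # the number to minimize
```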
Plan For Failure:
In my schedules, I specifically write in “distraction time”. If you aren’t already doing this, you may want to consider it. Most of us (me included) have wandering attention, and I know I’ll lose at least some time to silly things every day.
Double Your Estimate:
I get it. The three debiasing techniques I outlined above can sometimes take too long. In a pinch, you can probably approximate good predictions by just doubling your naive prediction.
Most people tend to be less than 2X overconfident, but I think (pessimistically) sticking to doubling is probably still better than something like 1.5X.
Working in Groups:
Obviously, because groups are made of individuals, we’d expect them to be susceptible to the same overconfidence biases I covered earlier. Though some research has shown that groups are less susceptible to bias, more studies have shown that group predictions can be far more optimistic than individual predictions [Wright and Wells, 1985; Buehler et al., 2010]. “Groupthink” is a term used to describe the observed failings of decision making in groups [Janis, 1982].
Groupthink (and hopefully also overconfidence) can be countered by either assigning a “Devil’s Advocate” or engaging in “dialectical inquiry” [Lunenburg 2012]:
A Devil’s Advocate is a person who actively tries to find fault with the group’s plans, looking for holes in the reasoning or other objections. It’s suggested that the role rotate among group members, and the practice is associated with other positives like improved communication skills.
A dialectical inquiry is where multiple teams each try to create the best plan and then present them. Discussion follows, and the group selects the best parts of each plan. It’s a little like building something awesome out of lots of pieces, like a giant robot.
For both strategies, research has shown that they lead to “higher-quality recommendations and assumptions” (compared to not doing them), although it can also reduce group satisfaction and acceptance of the final decision [Schweiger et al. 1986].
(Pretty obvious though; who’d want to keep chatting with someone hell-bent on poking holes in your plan?)
Conclusion:
If you’re interested in learning (even) more about the planning fallacy, I’d highly recommend the paper The Planning Fallacy: Cognitive, Motivational, and Social Origins by Roger Buehler, Dale Griffin, and Johanna Peetz. Most of the material in this guide was taken from their paper. Do go check it out! It’s free!
Remember that everyone is overconfident (you and me included!), and that plans falling behind is the norm. There are scary unknown unknowns out there that we just don’t know about!
Good luck and happy planning!
Footnotes:
* Just don’t go and start buying lottery tickets with your gut. We’re talking about fairly “normal” things like catching a ball, where your intuitions give you accurate predictions about where the ball will land. (Instead of, say, calculating the actual projectile motion equation in your head.)
** In a pinch, you can just use your memory, but studies have shown that our memory tends to be biased too. So as often as possible, try to use actual measurements and numbers from past experience.
Works Cited:
Buehler, Roger, Dale Griffin, and Johanna Peetz. “The Planning Fallacy: Cognitive,
Motivational, and Social Origins.” Advances in Experimental Social Psychology 43 (2010): 1-62. Social Science Research Network.
Buehler, Roger, Dale Griffin, and Michael Ross. “Exploring the Planning Fallacy: Why People
Underestimate their Task Completion Times.” Journal of Personality and Social Psychology 67.3 (1994): 366.
Buehler, Roger, Dale Griffin, and Heather MacDonald. “The Role of Motivated Reasoning in
Optimistic Time Predictions.” Personality and Social Psychology Bulletin 23.3 (1997): 238-247.
Buehler, Roger, Dale Griffin, and Michael Ross. “It’s About Time: Optimistic Predictions in
Work and Love.” European Review of Social Psychology Vol. 6, (1995): 1–32
Buehler, Roger, et al. “Perspectives on Prediction: Does Third-Person Imagery Improve Task
Completion Estimates?.” Organizational Behavior and Human Decision Processes 117.1 (2012): 138-149.
Buehler, Roger, Dale Griffin, and Michael Ross. “Inside the Planning Fallacy: The Causes and
Consequences of Optimistic Time Predictions.” Heuristics and Biases: The Psychology of Intuitive Judgment (2002): 250-270.
Buehler, Roger, and Dale Griffin. “Planning, Personality, and Prediction: The Role of Future
Focus in Optimistic Time Predictions.” Organizational Behavior and Human Decision Processes 92 (2003): 80-90.
Flyvbjerg, Bent. “From Nobel Prize to Project Management: Getting Risks Right.” Project
Management Journal 37.3 (2006): 5-15. Social Science Research Network.
Flyvbjerg, Bent. “Curbing Optimism Bias and Strategic Misrepresentation in Planning:
Reference Class Forecasting in Practice.” European Planning Studies 16.1 (2008): 3-21.
Janis, Irving Lester. “Groupthink: Psychological Studies of Policy Decisions and Fiascoes.”
(1982).
Johnson, Dominic DP, and James H. Fowler. “The Evolution of Overconfidence.” Nature
477.7364 (2011): 317-320.
Kahneman, Daniel. Thinking, Fast and Slow. Macmillan, 2011.
Kahneman, Daniel, and Dan Lovallo. “Timid Choices and Bold Forecasts: A Cognitive
Perspective on Risk Taking.” Management Science 39.1 (1993): 17-31.
Klein, Gary. Sources of Power: How People Make Decisions. MIT Press, 1999.
Klein, Gary. “Performing a Project Premortem.” Harvard Business Review 85.9 (2007): 18-19.
Krizan, Zlatan, and Paul D. Windschitl. “Wishful Thinking About the Future: Does Desire
Impact Optimism?” Social and Personality Psychology Compass 3.3 (2009): 227-243.
Lunenburg, F. “Devil’s Advocacy and Dialectical Inquiry: Antidotes to Groupthink.”
International Journal of Scholarly Academic Intellectual Diversity 14 (2012): 1-9.
Mitchell, Deborah J., J. Edward Russo, and Nancy Pennington. “Back to the Future: Temporal
Perspective in the Explanation of Events.” Journal of Behavioral Decision Making 2.1 (1989): 25-38.
Newby-Clark, Ian R., et al. “People Focus on Optimistic Scenarios and Disregard Pessimistic
Scenarios While Predicting Task Completion Times.” Journal of Experimental Psychology: Applied 6.3 (2000): 171.
Pronin, Emily, and Lee Ross. “Temporal Differences in Trait Self-Ascription: When the Self is
Seen as an Other.” Journal of Personality and Social Psychology 90.2 (2006): 197.
Roy, Michael M., Nicholas JS Christenfeld, and Craig RM McKenzie. “Underestimating the
Duration of Future Events: Memory Incorrectly Used or Memory Bias?.” Psychological Bulletin 131.5 (2005): 738.
Schweiger, David M., William R. Sandberg, and James W. Ragan. “Group Approaches for
Improving Strategic Decision Making: A Comparative Analysis of Dialectical Inquiry,
Devil’s Advocacy, and Consensus.” Academy of Management Journal 29.1 (1986): 51-71.
Veinott, Beth, Gary Klein, and Sterling Wiggins. “Evaluating the Effectiveness of the Premortem
Technique on Plan Confidence.” Proceedings of the 7th International ISCRAM Conference (May 2010).
Wiese, Jessica, Roger Buehler, and Dale Griffin. “Backward Planning: Effects of Planning
Direction on Predictions of Task Completion Time.” Judgment and Decision Making 11.2
(2016): 147.
Wright, Edward F., and Gary L. Wells. “Does Group Discussion Attenuate the Dispositional
Bias?.” Journal of Applied Social Psychology 15.6 (1985): 531-546.