in a classic experiment, 37 psychology students were asked to estimate how long it would take them to finish their senior theses “if everything went as poorly as it possibly could,” and they still underestimated the time it would take, as a group (the average prediction was 48.6 days, and the average actual completion time was 55.5 days).
That’s nuts. Does anyone really think that “if everything went as poorly as it possibly could” the thesis would ever get done at all? It’s so bizarre it makes me question whether the students actually understood what they were being asked.
IME the sense that this is nuts seems to be a quirk of STEM thinking. In practice, most non-rationalists seem to interpret “How long will this take if everything goes as poorly as possible?” as something like “Assume it gets done but the process is super shitty. How long will it take?”
It’s a quirk of rationalist culture (and a few others — I’ve seen this from physicists too) to take the words literally and propose that “infinitely long” is a plausible answer, and be baffled as to how anyone could think otherwise.
Many smart political science and English majors don’t seem to go down that line of reasoning, for instance.
“Assume it gets done but the process is super shitty. How long will it take?”
I would, in fact, consider that interpretation to be unambiguously “not understanding what they were being asked,” given the question in the post. Not understanding what is being asked is something that happens a fair bit.
I’ll give you that if they had asked “assuming it gets done but everything goes as poorly as possible, how long did it take?”, it takes a bit of a strange mind to look for some weird scenario that strings things along for years, or centuries, or eons. But “never” is something that happens quite often. Even given that people do respond with (overly optimistic) numbers, I’m not convinced that “never” just didn’t occur to them because they were insufficiently motivated to give a correct answer.
If what you say is true, though, that’s a bit discouraging for the prospects of communicating with even smart political science and English majors. I don’t suppose you know some secret way to get them to respond to what you actually say, not something else kind of similar?
I would, in fact, consider that interpretation to be unambiguously “not understanding what they were being asked,” given the question in the post.
Two points:
“Not understanding what they were being asked” isn’t an explanation. This is a common (really, close-to-universal) gap in people’s attempts to understand one another. If these students didn’t “understand” (noting the ambiguity about what exactly that means), what were they doing instead? “Being stupid”? “Not thinking”? “Falling prey to biases”? None of this tells you what they were doing.
I think what I described is a perfectly fine way for someone to interpret the question.
For that second point, let’s look at the wording again:
…psychology students were asked to estimate how long it would take them to finish their senior theses “if everything went as poorly as it possibly could,”…
“How long it would take”. Which someone could reasonably interpret as implying that it would, in fact, get done. (Many (most?) people would consider the question “How long does this take if it never gets done?” to be gibberish.)
And in the context of a senior thesis, it’s useless to think about it in terms of it getting done in (say) 20 years. It’s gotta get done soon enough to graduate. I think in most cases within a single college term?
So given the context of the question, there are some reasonable constraints a mind might put on the question.
…which you’re interpreting as them “unambiguously” not understanding what was actually said.
This attitude about there being an objectively correct interpretation of what was said, and that it matches your interpretation of what was said, and that when people hear something else they’re making a mistake, is another way to describe the STEM thing I was talking about.
This works in STEM disciplines basically by definition. It’s actually a great power tool. You troubleshoot differences in interpretation by coming to shared agreements about how language works and what concepts to use to interpret things. It makes sense to find out who was in fact wrong. And you do so in part by emphasizing and presenting the argument that justifies your interpretation.
But that communication strategy doesn’t work in most places.
The “secret way” I use to communicate with non-rationalists is to assume absolutely everything they do and think makes sense on the inside, and I try to understand that way of making sense. It’s not about making them interpret things differently; that’s STEM thinking and only works with others who have agreed to STEM communication protocols. Instead, I try to get some clear sense of what it’s like to be them, and I account for that in how I communicate.
And in particular, I have to make a point of setting aside my confidence that I’m using correct interpretation protocols (or even that there is such a thing) and that they’re not responding to the words that were actually said. That’s anti-helpful for communication.
Given the context, I imagine what they were doing is making up a number that was bigger than another number they’d just made up. Humans are cognitive misers. A student would correctly guess that it doesn’t really matter if they get this question right and not try very hard. That’s actually what I would do in a context where it was clear that a numeric answer was required, I was expected to spend little time answering, and I was motivated not to leave that particular question blank.
My answer of “never” also took little thought (for me). I thought a bit more about it, and if I did interpret it as assuming the thesis gets done (which, yes, one can interpret it that way), then the answer would be “however many days it is until the last date on which the thesis will be accepted”. Which is also not the answer the students gave.
It’s a bad question that I don’t think it would ever even occur to anyone to ask if they weren’t trying to have a number that they could aggregate into a meaningless (but impressive-sounding) statistic. It’s in the vicinity of a much more useful question, which is “what could go poorly to make this thesis take a long time / longer than expected”. And sure, you could answer the original question by decomposing it into “what could go wrong” and “if all of those actually did go wrong, how long would it take”. And if you did that you’d have the answer to the first question, which is actually useful, regardless of how accurate the answer to the second is. But that’s the actual point of this whole post, right? And in fact the reason for citing that statistic at all is that it appears to demonstrate that these students were not doing that?
If we look at the student answers, they were off by ~7 days, or about a 12% error relative to the actual completion time.
The only way I can interpret your post is that you’re suggesting all of these students should have answered “never”.
I’m not convinced that “never” just didn’t occur to them because they were insufficiently motivated to give a correct answer.
How far off is “never” from the true answer of 55.5 days?
It’s about infinitely far off. It is an infinitely wrong answer. Even if a project ran 1000% over every worst-case pessimistic schedule, any finite prediction was still infinitely closer than “never”.
It’s a quirk of rationalist culture (and a few others — I’ve seen this from physicists too) to take the words literally and propose that “infinitely long” is a plausible answer, and be baffled as to how anyone could think otherwise.
That’s because “infinitely long” is a trivial answer for any task that isn’t literally impossible.[1] It provides 0 information and takes 0 computational effort. It might as well be the answer from a non-entity, like asking a brick wall how long the thesis could take to complete.
Question: How long can it take to do X?
Brick wall: Forever. Just go do not-X instead.
It is much more difficult to give an answer for how long a task can take assuming it gets done while anticipating and predicting failure modes that would cause the schedule to explode, and that same answer is actually useful since you can now take preemptive actions to avoid those failure modes—which is the whole point of estimating and scheduling as a logical exercise.
The actual conversation that happens during planning is
A: “What’s the worst case for this task?”
B: “6 months.”
A: “Why?”
B: “We don’t have enough supplies to get past 3 trial runs, so if any one of them is a failure, the lead time on new materials with our current vendor is 5 months.”
A: “Can we source a new vendor?”
B: “No, but… <some other idea>”
[1] In cases when something is literally impossible, instead of saying “infinitely long”, or “never”, it’s more useful to say “that task is not possible” and then explain why. Communication isn’t about finding the “haha, gotcha” answer to a question when asked.
Yes, given that question, IMO they should have answered “never”. 55.5 days isn’t the true answer, because in reality everything didn’t go as poorly as possible. You’re right, it’s a bad question that a brick wall would do a better job of answering correctly than a human who’s trying to be helpful.
The answer to your question is useful, but not because of the number. “What could go wrong to make this take longer than expected?” would elicit the same useful information without spuriously forcing a meaningless number to be produced.
I have a sense that this is a disagreement about how to decide what words “really” mean, and I have a sense that I disagree with you about how to do that.
“What could go wrong to make this take longer than expected?” would elicit the same useful information without spuriously forcing a meaningless number to be produced.
It is false that that question would elicit the same useful information. Quoting from something I previously wrote elsewhere:
Sometimes, we ask questions we think should work, but they don’t.
e.g. “What could go wrong?” is not a question that works for most people, most of the time.
It turns out, though, that if you just tell people “guess what—I bring to you a message from the future. The plan failed.” … it turns out that, in many cases, you can follow up this blunt assertion with the question “What happened?” and this question does work.
(The difference between “What could go wrong?” and “It went wrong—what happened?” is so large that it became the centerpiece of one of CFAR’s four most popular, enduring, and effective classes.)
Similarly: I’ve noticed that people sometimes ask me “What do you think?” or “Do you have any feedback?”
And it’s not, in fact, the case that my brain possesses none of the information they seek. The information is in there, lurking (just as people really do, in some sense, know what could go wrong).
But the question doesn’t work, for me. It doesn’t successfully draw out that knowledge. Some other question must be asked, to cause the words to start spilling out of my mouth. “Will a sixth grader understand this?”, maybe. Or “If I signed your name at the bottom of this essay and told everyone you wrote it, that’d be okay, right?”
Right. I think I agree with everything you wrote here, but here it is again in my own words:
In communicating with people, the goal isn’t to ask a hypothetically “best” question and wonder why people don’t understand or don’t respond in the “correct” way. The goal is to be understood and to share information and acquire consensus or agree on some negotiation or otherwise accomplish some task.
This means that in real communication with real people, you often need to ask different questions to different people to arrive at the same information, or phrase some statement differently for it to be understood. There shouldn’t be any surprise or paradox here. When I am discussing an engineering problem with engineers, I phrase it in the terminology that engineers will understand. When I need to communicate that same problem to upper management, I do not use the same terminology that I use with my engineers.
Likewise, there’s a difference when I’m communicating with some engineering intern or new grad right out of college, vs a senior engineer with a decade of experience. I tailor my speech for my audience.
In particular, if I asked this question to Kenoubi (“what’s the worst case for how long this thesis could take you?”), and Kenoubi replied “It never finishes”, then I would immediately follow up with the question, “Ok, considering cases when it does finish, what does the worst case look like?” And if that got the reply “the day before it is required to be due”, I would then start poking at “What would cause that to occur?”.
The reason why I start with the first question is because it works for, I don’t know, 95% of people I’ve ever interacted with in my life? In my mind, it’s rational to start with a question that almost always elicits the information I care about, even if there’s some small subset of the population that will force me to choose my words as if they’re being interpreted by a Monkey’s paw.
First, consider the question of, “are these predictions totally useless?” This is an important question because I stand by my claim that the answer of “never” is actually totally useless due to how trivial it is.
Despite the optimistic bias, respondents’ best estimates were by no means devoid of information: The predicted completion times were highly correlated with actual completion times (r = .77, p < .001). Compared with others in the sample, respondents who predicted that they would take more time to finish actually did take more time. Predictions can be informative even in the presence of a marked prediction bias.
...
Respondents’ optimistic and pessimistic predictions were both strongly correlated with their actual completion times (rs = .73 and .72, respectively; ps < .01).
Yep. Matches my experience.
We know that only 11% of students met their optimistic targets, and only 30% of students met their “best guess” targets. What about the pessimistic target? It turns out, 50% of the students did finish by that target. That’s not just a quirk, because it’s actually related to the distribution itself.
However, the distribution of difference scores from the best-guess predictions were markedly skewed, with a long tail on the optimistic side of zero, a cluster of scores within 5 or 10 days of zero, and virtually no scores on the pessimistic side of zero. In contrast, the differences from the worst-case predictions were noticeably more symmetric around zero, with the number of markedly pessimistic predictions balancing the number of extremely optimistic predictions.
In other words, asking people for a best guess or an optimistic prediction results in a biased prediction that is almost always earlier than the real delivery date. On the other hand, while the pessimistic question is not more accurate (it has the same absolute error margins), it is unbiased. The study found that people asked the pessimistic question were equally likely to over-estimate their deadline as they were to under-estimate it. If you don’t think a question that gives you a distribution centered on the right answer is useful, I’m not sure what to tell you.
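The bias-versus-accuracy distinction can be made concrete with a quick simulation. All numbers here are invented for illustration, not taken from the study; the point is only that an estimator centered on the truth (unbiased) can still have roughly the same absolute error as a biased one:

```python
import random

random.seed(0)

# Toy simulation (numbers invented, not from the study): compare a
# "best guess" estimator that runs ~7 days optimistic with a
# "pessimistic" estimator centered on the truth.
actual = [random.gauss(55, 10) for _ in range(10_000)]

# Best guess: centered ~7 days early (biased low), with some noise.
best_guess = [a - 7 + random.gauss(0, 5) for a in actual]

# Pessimistic: centered on the truth (unbiased), with wider noise so
# its absolute error is comparable to the biased estimator's.
pessimistic = [a + random.gauss(0, 9) for a in actual]

def mean_error(est):
    # Signed error: negative means the estimate was too early.
    return sum(e - a for e, a in zip(est, actual)) / len(actual)

def mean_abs_error(est):
    # Unsigned error: how far off the estimate was, either direction.
    return sum(abs(e - a) for e, a in zip(est, actual)) / len(actual)

print(mean_error(best_guess))      # ~ -7: consistently early
print(mean_error(pessimistic))     # ~ 0: centered on the truth
print(mean_abs_error(best_guess))  # comparable absolute error
print(mean_abs_error(pessimistic)) # comparable absolute error
```

The two estimators miss by similar amounts on average, but only one of them tells you something systematic about which direction reality will land.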
The paper actually did a number of experiments. That was just the first.
In the third experiment, the study tried to understand what people are thinking about when estimating.
Proportionally more responses concerned future scenarios (M = .74) than relevant past experiences (M = .07), t(66) = 13.80, p < .001. Furthermore, a much higher proportion of subjects’ thoughts involved planning for a project and imagining its likely progress (M = .71) rather than considering potential impediments (M = .03), t(66) = 18.03, p < .001.
This seems relevant considering that the idea of premortems or “worst case” questioning is to elicit impediments, and the project managers / engineering leads doing that questioning are intending to hear about impediments and will continue their questioning until they’ve been satisfied that the group is actually discussing that.
In the fourth experiment, the study tried to understand why it is that people don’t think about their past experiences. They discovered that just prompting people to consider past experiences was insufficient; people actually needed additional prompting to make their past experience “relevant” to their current task.
Subsequent comparisons revealed that subjects in the recall-relevant condition predicted they would finish the assignment later than subjects in either the recall condition, t(79) = 1.99, p < .05, or the control condition, t(80) = 2.14, p < .04, which did not differ significantly from each other, t(81) < 1.
...
Further analyses were performed on the difference between subjects’ predicted and actual completion times. Subjects underestimated their completion times significantly in the control (M = −1.3 days), t(40) = 3.03, p < .01, and recall conditions (M = −1.0 day), t(41) = 2.10, p < .05, but not in the recall-relevant condition (M = −0.1 days), t(39) < 1. Moreover, a higher percentage of subjects finished the assignments in the predicted time in the recall-relevant condition (60.0%) than in the recall and control conditions (38.1% and 29.3%, respectively), χ²(2, N = 123) = 7.63, p < .01. The latter two conditions did not differ significantly from each other.
...
The absence of an effect in the recall condition is rather remarkable. In this condition, subjects first described their past performance with projects similar to the computer assignment and acknowledged that they typically finish only 1 day before deadlines. Following a suggestion to “keep in mind previous experiences with assignments,” they then predicted when they would finish the computer assignment. Despite this seemingly powerful manipulation, subjects continued to make overly optimistic forecasts. Apparently, subjects were able to acknowledge their past experiences but disassociate those episodes from their present predictions. In contrast, the impact of the recall-relevant procedure was sufficiently robust to eliminate the optimistic bias in both deadline conditions
How does this compare to the first experiment?
Interestingly, although the completion estimates were less biased in the recall-relevant condition than in the other conditions, they were not more strongly correlated with actual completion times, nor was the absolute prediction error any smaller. The optimistic bias was eliminated in the recall-relevant condition because subjects’ predictions were as likely to be too long as they were to be too short. The effects of this manipulation mirror those obtained with the instruction to provide pessimistic predictions in the first study: When students predicted the completion date for their honor’s thesis on the assumption that “everything went as poorly as it possibly could” they produced unbiased but no more accurate predictions than when they made their “best guesses.”
It’s common in engineering to perform group estimates. Does the study look at that? Yep, the fifth and last experiment asks individuals to estimate the performance of others.
As hypothesized, observers seemed more attuned to the actors’ base rates than did the actors themselves. Observers spontaneously used the past as a basis for predicting actors’ task completion times and produced estimates that were later than both the actors’ estimates and their completion times.
So observers are more pessimistic. Actually, observers are so pessimistic that you have to average it with the optimistic estimates to get an unbiased estimate.
One of the most consistent findings throughout our investigation was that manipulations that reduced the directional (optimistic) bias in completion estimates were ineffective in increasing absolute accuracy. This implies that our manipulations did not give subjects any greater insight into the particular predictions they were making, nor did they cause all subjects to become more pessimistic (see Footnote 2), but instead caused enough subjects to become overly pessimistic to counterbalance the subjects who remained overly optimistic. It remains for future research to identify those factors that lead people to make more accurate, as well as unbiased, predictions. In the real world, absolute accuracy is sometimes not as important as (a) the proportion of times that the task is completed by the “best-guess” date and (b) the proportion of dramatically optimistic, and therefore memorable, prediction failures. By both of these criteria, factors that decrease the optimistic bias “improve” the quality of intuitive prediction.
At the end of the day, there are certain things that are known about scheduling / prediction.
In general, individuals are as wrong as they are right for any given estimate.
In general, people are overly optimistic.
But, estimates generally correlate well with actual duration—if an individual thinks something is longer in estimate than another task, it most likely is! This is why in software, estimation is sometimes not in units of time at all, but in a concept called “points”.
The larger and more nebulously scoped the task, the worse any estimates will be in absolute error.
The length of a time a task can take follows a distribution with a very long right tail—a task that takes way longer than expected can take an arbitrary amount of time, but the fastest time to complete a task is limited.
The best way to actually schedule or predict a project is to break it down into as many small component tasks as possible, identify dependencies between those tasks, produce most likely, optimistic, and pessimistic estimates for each task, and then run a simulation over the chain of dependencies to see what the expected project completion looks like. Use a Gantt chart. This is a boring answer because it’s the “learn project management” answer, and people will hate on it because (gestures vaguely at all of the projects that overrun their schedules). There are many interesting reasons for why that happens and why I don’t think it’s a massive failure of rationality, but I’m not sure this comment is a good place to go into detail on that. The quick answer is that comical overrun of a schedule has less to do with an inability to create correct schedules from an engineering / evidence-based perspective, and much more to do with a bureaucratic or organizational refusal to accept an evidence-based schedule when a totally false but politically palatable “optimistic” schedule is preferred.
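As a sketch of what that simulation step can look like, here is a crude Monte Carlo over three-point estimates. The task names, numbers, and dependency structure are all invented for illustration; a triangular distribution stands in for the right-skewed task-duration distributions discussed above:

```python
import random

random.seed(1)

# Hypothetical tasks with (optimistic, most likely, pessimistic)
# estimates in days. All values are made up for illustration.
tasks = {
    "design": (2, 4, 10),
    "build":  (5, 8, 20),
    "test":   (2, 3, 12),
    "docs":   (1, 2, 6),
}
# Dependencies: "build" follows "design"; "test" follows "build";
# "docs" can run in parallel with "test".

def sample_duration(optimistic, likely, pessimistic):
    # Triangular distribution: a cheap stand-in for a right-skewed
    # task-duration distribution bounded by the three estimates.
    return random.triangular(optimistic, pessimistic, likely)

def simulate_once():
    d = {name: sample_duration(*est) for name, est in tasks.items()}
    serial = d["design"] + d["build"]            # the serial chain
    return serial + max(d["test"], d["docs"])    # parallel tail

runs = sorted(simulate_once() for _ in range(10_000))
p50 = runs[len(runs) // 2]
p90 = runs[int(len(runs) * 0.9)]
print(f"median finish: {p50:.1f} days, 90th percentile: {p90:.1f} days")
```

The useful output isn’t a single number but the spread: the gap between the median and the 90th percentile is exactly the schedule risk that a single “best guess” hides.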
The best way to actually schedule or predict a project is to break it down into as many small component tasks as possible, identify dependencies between those tasks, produce most likely, optimistic, and pessimistic estimates for each task, and then run a simulation over the chain of dependencies to see what the expected project completion looks like. Use a Gantt chart. This is a boring answer because it’s the “learn project management” answer, and people will hate on it because (gestures vaguely at all of the projects that overrun their schedules). There are many interesting reasons for why that happens and why I don’t think it’s a massive failure of rationality, but I’m not sure this comment is a good place to go into detail on that. The quick answer is that comical overrun of a schedule has less to do with an inability to create correct schedules from an engineering / evidence-based perspective, and much more to do with a bureaucratic or organizational refusal to accept an evidence-based schedule when a totally false but politically palatable “optimistic” schedule is preferred.
I definitely agree that this is the way to get the most accurate prediction practically possible, and that organizational dysfunction often means this isn’t used, even when the organization would be better able to achieve its goals with an accurate prediction. But I also think that depending on the type of project, producing an accurate Gantt chart may take a substantial fraction of the effort (or even a substantial fraction of the wall-clock time) of finishing the entire project, or may not even be possible without already having some of the outputs of the processes earlier in the chart. These aren’t necessarily possible to eradicate, so the take-away, I think, is not to be overly optimistic about the possibility of getting accurate schedules, even when there are no ill intentions and all known techniques to make more accurate schedules are used.
In other words, asking people for a best guess or an optimistic prediction results in a biased prediction that is almost always earlier than a real delivery date. On the other hand, while the pessimistic question is not more accurate (it has the same absolute error margins), it is unbiased. The reality is that the study says that people asked for a pessimistic question were equally likely to over-estimate their deadline as they were to under-estimate it. If you don’t think a question that gives you a distribution centered on the right answer is useful, I’m not sure what to tell you.
It’s interesting that the median of the pessimistic expectations is about equal to the median of the actual results. The mean clearly wasn’t, as that discrepancy was literally the point of citing this statistic in the OP:
in a classic experiment, 37 psychology students were asked to estimate how long it would take them to finish their senior theses “if everything went as poorly as it possibly could,” and they still underestimated the time it would take, as a group (the average prediction was 48.6 days, and the average actual completion time was 55.5 days).
So the estimates were biased, but not median-biased (at least that’s what Wikipedia appears to say the terminology is). Less biased than other estimates, though. Of course this assumes we’re taking the answer to “how long would it take if everything went as poorly as it possibly could” and interpreting it as the answer to “how long will it actually take”, and if students were actually asked after the fact if everything went as poorly as it possibly could, I predict they would mostly say no. And treating the text “if everything went as poorly as it possibly could” as if it wasn’t even there is clearly wrong too, because they gave a different (more biased towards optimism) answer if it was omitted.
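“Biased but not median-biased” can be shown with a toy distribution (invented for illustration, not fitted to the study): when the error distribution is right-skewed, half the predictions can land on time even though the average miss is clearly positive.

```python
import random

random.seed(2)

# Invented right-skewed distribution of (actual - predicted) days.
# The median is ~0 (half finish by the predicted date) while the
# mean is clearly positive (on average, actual exceeds predicted).
diffs = sorted(random.lognormvariate(1.5, 0.8) - 4.5
               for _ in range(10_001))
median = diffs[len(diffs) // 2]
mean = sum(diffs) / len(diffs)
print(f"median miss: {median:.2f} days, mean miss: {mean:.2f} days")
```

This is the same shape as the study’s pessimistic-prediction results: the long right tail drags the mean up without moving the median.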
This specific question seems kind of hard to make use of from a first-person perspective. But I guess maybe as a third party one could ask for worst-possible estimates and then treat them as median-unbiased estimators of what will actually happen? Though I also don’t know if the median-unbiasedness is a happy accident. (It’s not just a happy accident, there’s something there, but I don’t know whether it would generalize to non-academic projects, projects executed by 3rd parties rather than oneself, money rather than time estimates, etc.)
I do still also think there’s a question of how motivated the students were to give accurate answers, although I’m not claiming that if properly motivated they would re-invent Murphyjitsu / the pre-mortem / etc. from whole cloth; they’d probably still need to already know about some technique like that and believe it could help get more accurate answers. But even if a technique like that is an available action, it sounds like a lot of work, only worth doing if the output has a lot of value (e.g. if one suspects a substantial chance of not finishing the thesis before it’s due, one might wish to figure out why so one could actively address some of the reasons).
I have a sense that this is a disagreement about how to decide what words “really” mean, and I have a sense that I disagree with you about how to do that.
I had already (weeks ago) approvingly cited that particular post and asked my wife and my best friend to read it, which I think puts it at the 99.5th percentile or higher of LW posts in terms of my wanting its message to be understood and taken to heart, so I think I disagree with this comment about as strongly as is possible.
I simply missed the difference between “what could go wrong” and “you failed, what happened” while I was focusing on the difference between “what could go wrong” and “how long could it take if everything goes as poorly as possible”.
You’re right—“you failed, what happened” does create a mental frame that “what could go wrong” does not. I don’t think “how long could it take if everything goes as poorly as possible” creates any more useful of a frame than “you failed, what happened”. But it does, formally, request a number. I don’t think that number, itself, is good for anything. I’m not even convinced asking for that number is very effective for eliciting the “you failed, what happened” mindset. I definitely don’t think it’s more effective for that than just asking directly “you failed, what happened”.
My answer of “never” also took little thought (for me). I thought a bit more about it, and if I did interpret it as assuming the thesis gets done (which, yes, one can interpret it that way), then the answer would be “however many days it is until the last date on which the thesis will be accepted”. Which is also not the answer the students gave.
It’s a bad question that I don’t think it would ever even occur to anyone to ask if they weren’t trying to have a number that they could aggregate into a meaningless (but impressive-sounding) statistic. It’s in the vicinity of a much more useful question, which is “what could go poorly to make this thesis take a long time / longer than expected”. And sure, you could answer the original question by decomposing it into “what could go wrong” and “if all of those actually did go wrong, how long would it take”. And if you did that you’d have the answer to the first question, which is actually useful, regardless of how accurate the answer to the second is. But that’s the actual point of this whole post, right? And in fact the reason for citing that statistic at all is that it appears to demonstrate that these students were not doing that?
If we look at the student answers, they were off by ~7 days (48.6 predicted vs. 55.5 actual), about a 12% error relative to the actual completion time (or about 14% relative to their prediction).
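For concreteness, here is the arithmetic on the figures quoted from the study, showing how the percentage depends on which baseline you divide by:

```python
predicted, actual = 48.6, 55.5  # average predicted vs. actual days, per the study

error_days = actual - predicted
print(f"off by {error_days:.1f} days")                        # 6.9 days
print(f"{error_days / actual:.1%} relative to the actual time")    # 12.4%
print(f"{error_days / predicted:.1%} relative to the prediction")  # 14.2%
```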
The only way I can interpret your post is that you’re suggesting all of these students should have answered “never”.
How far off is “never” from the true answer of 55.5 days?
It’s about infinitely far off. It is an infinitely wrong answer. Even if a project ran 1000% over every worst-case pessimistic schedule, any finite prediction was still infinitely closer than “never”.
That’s because “infinitely long” is a trivial answer for any task that isn’t literally impossible.[1] It provides 0 information and takes 0 computational effort. It might as well be the answer from a non-entity, like asking a brick wall how long the thesis could take to complete.
Question: How long can it take to do X?
Brick wall: Forever. Just go do not-X instead.
It is much more difficult to give an answer for how long a task can take, assuming it gets done, while anticipating and predicting the failure modes that would cause the schedule to explode. That answer is actually useful: you can now take preemptive action to avoid those failure modes, which is the whole point of estimating and scheduling as a logical exercise.
The actual conversation that happens during planning is
A: “What’s the worst case for this task?”
B: “6 months.”
A: “Why?”
B: “We don’t have enough supplies to get past 3 trial runs, so if any one of them is a failure, the lead time on new materials with our current vendor is 5 months.”
A: “Can we source a new vendor?”
B: “No, but… <some other idea>”
In cases when something is literally impossible, instead of saying “infinitely long” or “never”, it’s more useful to say “that task is not possible” and then explain why. Communication isn’t about finding the “haha, gotcha” answer to a question when asked.
Yes, given that question, IMO they should have answered “never”. 55.5 days isn’t the true answer, because in reality everything didn’t go as poorly as possible. You’re right, it’s a bad question that a brick wall would do a better job of answering correctly than a human who’s trying to be helpful.
The answer to your question is useful, but not because of the number. “What could go wrong to make this take longer than expected?” would elicit the same useful information without spuriously forcing a meaningless number to be produced.
I have a sense that this is a disagreement about how to decide what words “really” mean, and I have a sense that I disagree with you about how to do that.
https://www.lesswrong.com/posts/57sq9qA3wurjres4K/ruling-out-everything-else
It is false that that question would elicit the same useful information. Quoting from something I previously wrote elsewhere:
Right. I think I agree with everything you wrote here, but here it is again in my own words:
In communicating with people, the goal isn’t to ask a hypothetically “best” question and wonder why people don’t understand or don’t respond in the “correct” way. The goal is to be understood and to share information and acquire consensus or agree on some negotiation or otherwise accomplish some task.
This means that in real communication with real people, you often need to ask different questions to different people to arrive at the same information, or phrase some statement differently for it to be understood. There shouldn’t be any surprise or paradox here. When I am discussing an engineering problem with engineers, I phrase it in the terminology that engineers will understand. When I need to communicate that same problem to upper management, I do not use the same terminology that I use with my engineers.
Likewise, there’s a difference when I’m communicating with some engineering intern or new grad right out of college, vs a senior engineer with a decade of experience. I tailor my speech for my audience.
In particular, if I asked this question to Kenoubi (“what’s the worst case for how long this thesis could take you?”), and Kenoubi replied “It never finishes”, then I would immediately follow up with the question, “Ok, considering cases when it does finish, what does the worst case look like?” And if that got the reply “the day before it is required to be due”, I would then start poking at “What would cause that to occur?”.
The reason why I start with the first question is because it works for, I don’t know, 95% of people I’ve ever interacted with in my life? In my mind, it’s rational to start with a question that almost always elicits the information I care about, even if there’s some small subset of the population that will force me to choose my words as if they’re being interpreted by a Monkey’s paw.
It didn’t work for the students in the study in the OP. That’s literally why the OP mentioned it!
It depends on what you mean by “didn’t work”. The study described is published in a paper only 16 pages long. We can just read it: http://web.mit.edu/curhan/www/docs/Articles/biases/67_J_Personality_and_Social_Psychology_366,_1994.pdf
First, consider the question of, “are these predictions totally useless?” This is an important question because I stand by my claim that the answer of “never” is actually totally useless due to how trivial it is.
Yep. Matches my experience.
We know that only 11% of students met their optimistic targets, and only 30% of students met their “best guess” targets. What about the pessimistic target? It turns out, 50% of the students did finish by that target. That’s not just a quirk, because it’s actually related to the distribution itself.
In other words, asking people for a best guess or an optimistic prediction yields a biased prediction that is almost always earlier than the real delivery date. The pessimistic question, while not more accurate (it has similar absolute error margins), is unbiased: the study found that people asked for a pessimistic estimate were equally likely to over-estimate their completion time as to under-estimate it. If you don’t think a question that gives you a distribution centered on the right answer is useful, I’m not sure what to tell you.
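To see how an estimate can be centered on the right answer without being any more accurate, here is a toy simulation (all numbers invented, not from the paper): with a long-right-tailed completion-time distribution, a guess at the median splits over-runs and under-runs 50/50, even though the mean sits well above it.

```python
import random
import statistics

random.seed(0)

# Hypothetical completion times with a long right tail (invented parameters).
actuals = [random.lognormvariate(4.0, 0.4) for _ in range(10_000)]

mean_time = statistics.mean(actuals)
median_time = statistics.median(actuals)

# A median-unbiased guess splits outcomes 50/50...
frac_done_by_guess = sum(t <= median_time for t in actuals) / len(actuals)

# ...yet the mean exceeds it, because overruns in the right tail are
# unbounded while early finishes are not.
print(f"mean {mean_time:.1f} days > median {median_time:.1f} days")
print(f"fraction finished by the median-based guess: {frac_done_by_guess:.2f}")
```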
The paper actually did a number of experiments. That was just the first.
In the third experiment, the study tried to understand what people are thinking about when estimating.
This seems relevant considering that the idea of premortems or “worst case” questioning is to elicit impediments, and the project managers / engineering leads doing that questioning are intending to hear about impediments and will continue their questioning until they’ve been satisfied that the group is actually discussing that.
In the fourth experiment, the study tries to understand why people don’t think about their past experiences. They discovered that just prompting people to consider past experiences was insufficient; they actually needed additional prompting to make their past experience “relevant” to their current task.
How does this compare to the first experiment?
It’s common in engineering to perform group estimates. Does the study look at that? Yep, the fifth and last experiment asks individuals to estimate the performance of others.
So observers are more pessimistic. Actually, observers are so pessimistic that you have to average their estimates with the optimistic self-estimates to get an unbiased estimate.
At the end of the day, there are certain things that are known about scheduling / prediction.
In general, individuals are as wrong as they are right for any given estimate.
In general, people are overly optimistic.
But, estimates generally correlate well with actual duration—if an individual thinks something is longer in estimate than another task, it most likely is! This is why in SW sometimes estimation is not in units of time at all, but in a concept called “points”.
The larger and more nebulously scoped the task, the worse any estimates will be in absolute error.
The length of a time a task can take follows a distribution with a very long right tail—a task that takes way longer than expected can take an arbitrary amount of time, but the fastest time to complete a task is limited.
The best way to actually schedule or predict a project is to break it down into as many small component tasks as possible, identify dependencies between those tasks, produce most likely, optimistic, and pessimistic estimates for each task, and then run a simulation over the chain of dependencies to see what the expected project completion looks like. Use a Gantt chart. This is a boring answer because it’s the “learn project management” answer, and people will hate on it because
gesture vaguely to all of the projects that overrun their schedule
. There are many interesting reasons for why that happens and why I don’t think it’s a massive failure of rationality, but I’m not sure this comment is a good place to go into detail on that. The quick answer is that comical overrun of a schedule has less to do with an inability to create correct schedules from an engineering / evidence-based perspective, and much more to do with a bureaucratic or organizational refusal to accept an evidence-based schedule when a totally false but politically palatable “optimistic” schedule is preferred.

I definitely agree that this is the way to get the most accurate prediction practically possible, and that organizational dysfunction often means this isn’t used, even when the organization would be better able to achieve its goals with an accurate prediction. But I also think that depending on the type of project, producing an accurate Gantt chart may take a substantial fraction of the effort (or even a substantial fraction of the wall-clock time) of finishing the entire project, or may not even be possible without already having some of the outputs of the processes earlier in the chart. These problems aren’t necessarily possible to eradicate, so the take-away, I think, is not to be overly optimistic about the possibility of getting accurate schedules, even when there are no ill intentions and all known techniques to make more accurate schedules are used.
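The simulation approach discussed above can be sketched in a few lines. This is a toy version with invented tasks and numbers: three-point (optimistic / most likely / pessimistic) estimates per task, a simple dependency chain, and a triangular distribution standing in for whatever distribution you would actually fit.

```python
import random

random.seed(1)

# Invented tasks with (optimistic, most likely, pessimistic) durations in days.
tasks = {
    "outline":  (2, 4, 10),
    "research": (10, 20, 60),
    "draft":    (15, 25, 70),
    "revise":   (5, 10, 40),
}
# deps maps each task to the tasks that must finish before it starts.
deps = {
    "outline": [],
    "research": ["outline"],
    "draft": ["research"],
    "revise": ["draft"],
}

def sample_duration(optimistic, likely, pessimistic):
    # Triangular distribution is a common stand-in for three-point estimates.
    return random.triangular(optimistic, pessimistic, likely)

def simulate_once():
    finish = {}
    for task in tasks:  # dict order already respects this chain's dependencies
        start = max((finish[d] for d in deps[task]), default=0.0)
        finish[task] = start + sample_duration(*tasks[task])
    return max(finish.values())

runs = sorted(simulate_once() for _ in range(10_000))
p50, p90 = runs[len(runs) // 2], runs[int(len(runs) * 0.9)]
print(f"median completion: {p50:.0f} days, 90th percentile: {p90:.0f} days")
```

The percentile spread, rather than a single number, is the output that matters: it makes the long right tail of the project visible before anyone commits to a date.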
It’s interesting that the median of the pessimistic expectations is about equal to the median of the actual results. The mean clearly wasn’t, as that discrepancy was literally the point of citing this statistic in the OP:
So the estimates were biased, but not median-biased (at least that’s what Wikipedia appears to say the terminology is). Less biased than other estimates, though. Of course this assumes we’re taking the answer to “how long would it take if everything went as poorly as it possibly could” and interpreting it as the answer to “how long will it actually take”, and if students were actually asked after the fact if everything went as poorly as it possibly could, I predict they would mostly say no. And treating the text “if everything went as poorly as it possibly could” as if it wasn’t even there is clearly wrong too, because they gave a different (more biased towards optimism) answer if it was omitted.
This specific question seems kind of hard to make use of from a first-person perspective. But I guess maybe as a third party one could ask for worst-possible estimates and then treat them as median-unbiased estimators of what will actually happen? Though I also don’t know if the median-unbiasedness is a happy accident. (It’s not just a happy accident, there’s something there, but I don’t know whether it would generalize to non-academic projects, projects executed by 3rd parties rather than oneself, money rather than time estimates, etc.)
I do still also think there’s a question of how motivated the students were to give accurate answers, although I’m not claiming that if properly motivated they would re-invent Murphyjitsu / the pre-mortem / etc. from whole cloth; they’d probably still need to already know about some technique like that and believe it could help get more accurate answers. But even if a technique like that is an available action, it sounds like a lot of work, only worth doing if the output has a lot of value (e.g. if one suspects a substantial chance of not finishing the thesis before it’s due, one might wish to figure out why so one could actively address some of the reasons).
I had already (weeks ago) approvingly cited and requested for my wife and my best friend to read that particular post, which I think puts it at 99.5th percentile or higher of LW posts in terms of my wanting its message to be understood and taken to heart, so I think I disagree with this comment about as strongly as is possible.
I simply missed the difference between “what could go wrong” and “you failed, what happened” while I was focusing on the difference between “what could go wrong” and “how long could it take if everything goes as poorly as possible”.
You’re right—“you failed, what happened” does create a mental frame that “what could go wrong” does not. I don’t think “how long could it take if everything goes as poorly as possible” creates any more useful of a frame than “you failed, what happened”. But it does, formally, request a number. I don’t think that number, itself, is good for anything. I’m not even convinced asking for that number is very effective for eliciting the “you failed, what happened” mindset. I definitely don’t think it’s more effective for that than just asking directly “you failed, what happened”.