Once again, Bayesian reasoning comes to the rescue. The assertion to stop updating based on new data (ignore the inside view!) is just plain wrong.
However, a reminder to be careful and objective about the probability one might assign to a new bit of data (inside view data is not privileged over outside view data! And it might be really bad!) is helpful.
“The assertion to stop updating based on new data (ignore the inside view!) is just plain wrong.”
I’d like to be able to say that, but there actually is research showing how human beings get more optimistic about their Christmas shopping estimates as they try to visualize the details of when, where, and how.
Your statement is certainly true of an ideal rational agent, but it may not hold in human practice.
“…updating based on new data (ignore the inside view!)…”

“…human beings get more optimistic… as they try to visualize the details of when, where, and how.”
Are updating based on new data and updating based on introspection equivalent? If not, then LongInTheTooth equivocated by calling ignoring the inside view a failure to update based on new data. But maybe they are equivalent under a non-logical-omniscience view of updating, and it’s necessary to factor in meta-information about the quality and reliability of the introspection.
“But maybe they are equivalent under a non-logical-omniscience view of updating, and it’s necessary to factor in meta-information about the quality and reliability of the introspection.”
Yes, that is what I was thinking, in a wishy-washy intuitive way rather than the explicit and clearly stated way you have helpfully provided.
The act of visualizing the future and planning how long a task will take, based on guesses about how long the subtasks will take, is what I would call generating new data, which one might use to update the probability of finishing the task by a specific date. (FogBugz Evidence Based Scheduling does exactly this, although with Monte Carlo simulation rather than Bayesian math.)
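To make the Monte Carlo idea concrete, here is a minimal sketch in Python. The task list, the historical estimate/actual pairs, and the simulate_total_hours helper are all invented for illustration; this shows the general technique, not FogBugz's actual implementation.

```python
import random

# Hypothetical historical data: (estimated hours, actual hours) for past tasks.
history = [(4, 6), (8, 7), (2, 5), (16, 20), (3, 3)]

# Remaining tasks on the current project, estimated in hours (also made up).
estimates = [5, 8, 3, 13]

def simulate_total_hours(estimates, history, trials=10_000):
    """Monte Carlo schedule simulation: scale each estimate by a randomly
    sampled historical actual/estimate ratio, then sum over all tasks.
    Returns a list of simulated project totals."""
    ratios = [actual / est for est, actual in history]
    return [
        sum(est * random.choice(ratios) for est in estimates)
        for _ in range(trials)
    ]

totals = sorted(simulate_total_hours(estimates, history))

# Probability of finishing within a 35-hour budget, plus a few percentiles.
budget = 35
p_finish = sum(t <= budget for t in totals) / len(totals)
print(f"P(total <= {budget}h) ~= {p_finish:.2f}")
for pct in (50, 80, 95):
    print(f"{pct}th percentile: {totals[int(len(totals) * pct / 100)]:.1f}h")
```

The output is a distribution over completion times rather than a single point estimate, so you can read off the probability of hitting any given date or budget.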
But research shows that when doing this exercise for homework assignments and Christmas shopping (and, incidentally, software projects), the data is terrible. Good point! Don’t lend much weight to this data for those projects.
I see Eliezer saying that sometimes the internally generated data isn’t bad after all.
So, applying a Bayesian perspective, the answer is: Be aware of your biases for internally generated data (inside view), and update accordingly.
And generalizing from my own experience, I would say, “Good luck with that!”
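For what it's worth, one way to cash out "update accordingly" is to model the reliability of the inside-view evidence explicitly. In the toy Bayes calculation below, every probability is invented for illustration: if optimistic detailed plans show up almost as often for projects that end up late as for projects that ship on time, the likelihood ratio is close to 1 and the posterior barely moves.

```python
# Toy Bayesian update illustrating "be aware of your biases and update accordingly".
# All numbers are invented for illustration.

prior_on_time = 0.3           # outside view: base rate of similar projects finishing on time

# Inside-view evidence: "my detailed plan says this will be easy."
# Planning-fallacy-afflicted estimators produce optimistic plans either way,
# so the evidence carries little information.
p_evidence_given_on_time = 0.9
p_evidence_given_late = 0.8

posterior = (p_evidence_given_on_time * prior_on_time) / (
    p_evidence_given_on_time * prior_on_time
    + p_evidence_given_late * (1 - prior_on_time)
)
print(f"prior: {prior_on_time:.2f}, posterior after optimistic plan: {posterior:.2f}")
# ~0.33 -- a small update, because the inside-view data is known to be unreliable.
```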