“But maybe they are equivalent under a non-logical-omniscience view of updating, and it’s necessary to factor in meta-information about the quality and reliability of the introspection.”
Yes, that is what I was thinking, in a wishy-washy, intuitive way rather than the explicit, clearly stated way you have helpfully provided.
The act of visualizing the future and planning how long a task will take, based on guesses about how long the subtasks will take, is what I would call generating new data, which one might then use to update the probability of finishing the task by a specific date. (FogBugz's Evidence Based Scheduling does exactly this, although with Monte Carlo simulation rather than Bayesian math.)
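Roughly, the Monte Carlo step works like this (a minimal sketch only; the function name and all numbers are made up for illustration, not taken from FogBugz): sample a past estimate-to-actual ratio for each subtask, divide the guess by it, and sum, many times over, to get a distribution of possible completion times rather than a single date.

```python
import random

def simulate_totals(estimates_hours, past_velocities, n_sims=10_000):
    """Monte Carlo schedule estimate in the spirit of Evidence Based Scheduling.

    past_velocities: historical (estimated / actual) ratios for this estimator.
    Returns a sorted list of simulated total hours for the whole task list.
    """
    totals = []
    for _ in range(n_sims):
        total = 0.0
        for est in estimates_hours:
            v = random.choice(past_velocities)  # sample one past velocity per task
            total += est / v                    # simulated actual time for this task
        totals.append(total)
    return sorted(totals)

# Hypothetical subtask guesses and a noisy estimating history:
estimates = [4, 8, 2, 16]                    # hours -- the "inside view" guesses
velocities = [1.0, 0.8, 0.5, 0.9, 0.3, 1.1]  # past estimate/actual ratios
totals = simulate_totals(estimates, velocities)
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"50% chance of finishing within {p50:.0f}h, 90% within {p90:.0f}h")
```

The distribution's spread is doing the work here: a wildly unreliable estimating history produces a long tail, which is exactly the "meta-information about the quality of the introspection" entering the calculation.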
But research shows that when people do this exercise for homework assignments and Christmas shopping (and, incidentally, software projects), the data is terrible. Good point! Don't lend much weight to this data for those kinds of projects.
I read Eliezer as saying that sometimes the internally generated data isn't bad after all.
So, applying a Bayesian perspective, the answer is: be aware of your biases when weighing internally generated data (the inside view), and update accordingly.
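In odds form, "update accordingly" just means using a smaller likelihood ratio for evidence you know is biased. A toy illustration, with every number invented for the sake of the example:

```python
# Toy Bayesian update, discounting low-reliability inside-view evidence.
prior_on_time = 0.3     # outside view: similar projects rarely finish on time

# "My plan says I'll make it" -- how much more likely is that observation
# if I really will finish on time than if I won't?
lr_if_trusted = 4.0     # likelihood ratio if introspection were reliable
lr_discounted = 1.3     # likelihood ratio after accounting for planning-fallacy bias

def update(prior, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

print(update(prior_on_time, lr_if_trusted))   # ~0.63 -- naive confidence
print(update(prior_on_time, lr_discounted))   # ~0.36 -- barely moved
```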
And generalizing from my own experience, I would say, “Good luck with that!”