But evidence of the “just believe me I’m special” variety won’t.
Oh come on, no-one is arguing that this should convince you. Obviously the outside view is the correct position in the absence of inside view evidence; no-one disputes that. The dispute is this: some people seem to believe that any effort to look into the details of the claim, rather than taking the outside view, is simply a self-serving effort to avoid the conclusions the outside view brings you to. Taken to extremes, this leads to the position I’m mocking. If you agree that inside view evidence is worth examining, then we’re on the same side in this discussion.
If you agree that inside view evidence is worth examining, then we’re on the same side in this discussion.
Isn’t the whole point of the outside view, as laid out in Eliezer’s original post, that sometimes you can get a better prediction by deliberately ignoring relevant inside view evidence? We need an algorithm to determine which inside view evidence to ignore, and the optimal algorithm clearly can’t be either “all” or “none”.
My instinct is to be conservative in how much inside view evidence to ignore. That is, only adopt the outside view in circumstances substantially similar to one of the experiments showing an advantage for the outside view.
In the case of cousin_it’s claim about Dennett not having solved the problem of consciousness, he seems to be saying that we should ignore the evidence constituted by the words in Dennett’s book, but take into account personal information about Dennett as a philosopher. I don’t see how the empirical evidence supports this position.
Isn’t the whole point of the outside view, as laid out in Eliezer’s original post, that sometimes you can get a better prediction by deliberately ignoring relevant inside view evidence? We need an algorithm to determine which inside view evidence to ignore, and the optimal algorithm clearly can’t be either “all” or “none”.
This is exactly the right way to state it. The question is, when is it better to ignore evidence? More precisely, when is it better for a human to ignore evidence? What, if any, biases and limitations of the human mind make inside-view reasoning dangerous?
This is an empirical question, to be settled by a study of human cognition. It’s not an abstract epistemological question that can be settled by armchair reasoning.
This is exactly the right way to state it. The question is, when is it better to ignore evidence? More precisely, when is it better for a human to ignore evidence? What, if any, biases and limitations of the human mind make inside-view reasoning dangerous?
It seems I failed to convey my answer in “Consider representative data sets”. The prototypes that come to mind in planning are not representative (they describe efficient if-all-goes-well plans, not their real-world outcomes), and so they should either be complemented by enough additional concepts to make up a representative data set (which doesn’t work in practice) or be forcibly excluded from consideration. The deeper the inside view the better, unless you are arriving at an answer intuitively under conditions of predictably tilted availability.
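One way to make “predictably tilted availability” concrete (a minimal sketch; the notation is illustrative, not from the comment above): write $T(s)$ for the outcome you care about in scenario $s$, for instance a project’s completion time, $p(s)$ for the real-world distribution of scenarios, and $q(s)$ for the distribution your planning intuition actually samples from. If $q$ systematically underweights the scenarios in which $T$ is large, then

$$\mathbb{E}_{s \sim q}[T(s)] \;<\; \mathbb{E}_{s \sim p}[T(s)],$$

so the intuitive inside-view estimate is biased low in a predictable direction. On this reading, “consider representative data sets” amounts to either re-weighting the imagined scenarios toward $p$ (which, per the comment, doesn’t work in practice) or throwing out the intuitive sample and falling back on the reference class, i.e. the outside view.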