But in a field like AI prediction, where experts lack feedback for their pronouncements, we should expect them to perform poorly, and for biases to dominate their thinking.
And that’s pretty much the key sentence.
There is little difference between experts and non-experts.
No, they’re completely different. Taw said that there are no people in a certain class; Stuart_Armstrong said that there is strong evidence that there are no people in a certain class.
Actually, what Stuart_Armstrong said was that we have shown certain classes of people (that we thought might be experts) are not, as a class, experts. The strong evidence is that we have not yet found a way to distinguish the class of experts. Which is, in my opinion, weak to moderate evidence that the class does not exist, not strong evidence. When it comes to trying to evaluate predictions on their own terms (because you’re curious about planning for your future life, for instance) the two statements are similar. In other cases (for example, trying to improve the state of the art of AI predictions, or predictions of the strongly unknown more generally), the two statements are meaningfully different.
Except there’s no such thing as an AGI expert.
There are classes of individuals that might plausibly be effective at predicting AGI—but this now appears not to be the case.
So, what taw said.