I think it’s appropriate to draw some better lines through concept space for apocalyptic predictions, when determining a base rate, than just “here’s an apocalyptic prediction and a date.” They aren’t all created equal.
Herbert W. Armstrong is on this list 4 times… each time with a new incorrect prediction. So you’re counting this one guy, who took 4 guesses and got all of them wrong, as 4 independent samples on which we should form a base rate.
And by using this guy in the base rate, you’re implying Eliezer’s prediction is in the same general class as Armstrong’s, which is a stretch to say the least.
A pretty simple class distinction is: how accurate are the other predictions the person has made? What has Eliezer’s prediction record been? How have his AI timeline predictions fared?
I don’t know the answers to these questions, and maybe they really have been bad, but I’m assuming they’re pretty good. If that’s the case, then clearly Eliezer’s prediction doesn’t deserve to be classified with the predictions listed on that page.