Putting smarter-than-human AI into the same class as the Rapture instead of the same class as, say, predictions for progress of space travel or energy or neuroscience, sounds to me suspiciously like reference class tennis. Your mind knows what it expects the answer to be, and picks a reference class accordingly. No doubt many of these experts did the same.
And so, once again, “distrust experts” ends up as “trust the invisible algorithm my brain just used or whatever argument I just made up, which of course isn’t going to go wrong the way those experts did”.
(The correct answer was to broaden confidence intervals in both/all directions.)
I do not believe that I was engaging in reference class tennis. I tried hard to put AI into the same class as “predictions for progress of space travel or energy or neuroscience”, but it just didn’t fit. Space travel predictions (of the low-earth-orbit variety) slowly converged in the 1940s and 1950s with the development of rocket propulsion, ICBMs, and later satellites. I am not familiar with the history of abundant energy predictions before and after the discovery of nuclear energy; maybe someone else is. I am not sure what neuroscience predictions you are talking about; feel free to clarify.
I do not believe that I was engaging in reference class tennis.
You weren’t, given the way Eliezer defines the term and the assumptions specified in your comment. I happen to disagree with you, but your comment does not qualify as reference class tennis, especially since you ended up assuming that the reference class is insufficiently populated to even be used unless people suggest things to include.