Fair enough. Whatever the actual timeline is, name a number of years before the inevitable AI catastrophe that humans are not going to solve. Your post here suggests it’s less than thirty, at least. If that time passes and there is no AI catastrophe, your confidence that you understand AI risk should go down drastically. (And no “well, wait a few more years after the number I originally gave you” either.)
And if you can’t name a number of years, that should heavily discount how urgently everyone should treat it, because you are in a situation where exaggerating its urgency carries no risk to you.
Hmm… I definitely think people should update if a long time passes and what Eliezer says doesn’t come to pass.
On the other hand, I don’t know how I feel about “reputational staking”. Part of the problem is that in order to have a bet you need a counterparty. If Eliezer does stake his reputation, then perhaps a few people would become marginally more concerned about safety, but if he were later to recant, that would probably affect a lot more people. So it’s unclear whether there’s a viable bet here.
That’s why I didn’t call this a bet. (I also didn’t demand he put any money on it, something which rationalists sometimes like and which has its own problems.)
The thing about having a counterparty is that this is already asymmetrical. Eliezer is making a dramatic, catastrophic prediction. If he turns out to be correct, then of course I’ll be proven wrong. I won’t have any choice but to admit that I’m wrong, as we’re all herded into shredders so our bodies can be used to make paperclips.
But can Eliezer be proven wrong? No, not if he stays vague about exactly how long we need to wait, and leaves open the possibility of “oh, I said 5 years? I meant 7. No, I meant 9....”
And if he can’t be proven wrong, he has no incentive not to exaggerate the danger. The way it should work is that the more catastrophic your prediction, the worse you look when it fails, so you’re not going to exaggerate the danger just to get people to listen to you.
If twenty years from now you and I are both still alive and free, I will happily say “You were right, Jiro, I was wrong.”
Eliezer is the person who made the prediction to an audience and who is being taken as an authority, not you.
Agreed. The current statements are so vague as to be impossible to falsify, and hence they are not scientific statements.