Making highly visible predictions about AGI timelines as a safety figure is a lose-lose situation. If you’re right, you will all be dead, so it won’t matter. If you’re wrong, bad people who don’t make any predictions will use yours to tar you as a kook. Then everyone will stop listening to you, and AGI will come five years later and you’ll all be dead.
I’m not saying he shouldn’t shut up about the Metaculus updates (maybe he should), but he’s in a bit of a bind here. And as you noticed, he has in fact made a substantial prediction via his bet with Caplan. The reason he doesn’t do much else is that (in my model of Eliezer) the kinds of people who are likely to take heed of his bets are more likely to be intellectually honest.
I don’t like this defense for two reasons. First, I don’t see why the same argument doesn’t apply to the role Eliezer has already adopted as an early and insistent voice of concern. Being deliberately vague on some types of predictions doesn’t change the fact that his name is synonymous with AI doomsaying. Second, we’re talking about a person whose whole brand is built around intellectual transparency and reflection; if Eliezer’s predictive model of AI development contains relevant deficiencies, I wish to believe that Eliezer’s predictive model of AI development contains relevant deficiencies. I recognize the incentives may well be aligned against him here, but it’s frustrating that he seems to want to be taken seriously on the topic yet isn’t obviously equally open to being rebutted in good faith.
If you’re right, you will all be dead, so it won’t matter
Posting a concrete forecast might motivate some people to switch into working on the problem, work harder on it, or cut back on work that increases risk (e.g. capabilities work). That might then make the forecast less accurate, but that seems like a small price to pay versus everyone being dead. (And you could always update the forecast in response to how people react.)