I note that one central premise is “none of these estimates is made using any special knowledge”. I found this to be a crucial and interesting claim for evaluating AGI x-risk arguments. I think this is actually true for many people making p(doom) estimates. But it is definitely not true of everyone.
It’s as though we’re asking people to predict a recession without bothering to ask whether they have any education in economics, or how much time they’ve spent thinking about the factors that lead to recessions.
What specialized knowledge is relevant here? I suggest two types of expertise: how current AI works, and how a mind capable of posing an x-risk could work.
I listened to this expecting it to consist solely of the fallacy “if people disagree dramatically about a risk, we should act as though the risk is zero”.
It does pretty much draw that very wrong conclusion.
However, a lot of the rest of the critique, of the various AI x-risk models and of how we aggregate them, is surprisingly good. I fully agree that choosing reference classes purely from intuition is worth nothing as a predictive model. Choosing them from careful analysis, however, gets you closer.
But in the end, having good gears-level models of how AGI would be built and how humans would succeed or fail at building it is the real qualification for predicting AGI x-risk, and few human beings seem to have spent more than trivial time on the necessary constellation of questions. We should be asking for time-on-task along with each p(doom) estimate, and asking which tasks that time was spent on.