If someone wants to establish probabilities, they should be more systematic and, for example, use reference classes. It seems to me that there has been relatively little of this for AI risk arguments in the community, though more in the past few years.
Maybe reference classes are a kind of analogy, just more systematic and so less prone to motivated selection? If so, then it seems hard to forecast without “analogies” of some kind, but reference classes are still better. On the other hand, even with reference classes, we face the problem of deciding which reference class to use, how to weigh multiple classes, or what other adjustments to make, and those choices can still be subject to motivated reasoning in the same way.
We can try to be systematic about our search for and consideration of reference classes, make estimates across a range of reference classes or weightings of them, and do sensitivity analysis. Zach Freitas-Groff seems to have done something like this in AGI Catastrophe and Takeover: Some Reference Class-Based Priors, for which he won a prize from Open Phil’s AI Worldviews Contest.
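As a toy illustration of that kind of sensitivity analysis (not Freitas-Groff’s actual method), here is a minimal sketch in Python. The reference classes, base rates, and weight grid are all made up for the example; the point is just to see how much the aggregate estimate moves as we shift weight between classes.

```python
# Toy sensitivity analysis over reference classes (illustrative only).
# The reference classes and base rates below are placeholders,
# not taken from any actual paper.
import itertools

# Hypothetical reference classes with rough base rates for "catastrophe".
reference_classes = {
    "new technology causes extinction-level catastrophe": 0.0001,
    "powerful actor attempts takeover": 0.01,
    "transformative technology causes major disruption": 0.05,
}

def weighted_estimate(weights):
    """Weighted average of the base rates; weights should sum to 1."""
    return sum(w * p for w, p in zip(weights, reference_classes.values()))

# Sweep over a coarse grid of weightings to see how sensitive the
# aggregate estimate is to which reference class we privilege.
grid = [w for w in itertools.product([0.0, 0.25, 0.5, 0.75, 1.0], repeat=3)
        if abs(sum(w) - 1.0) < 1e-9]

estimates = [weighted_estimate(w) for w in grid]
print(f"min={min(estimates):.4f}, max={max(estimates):.4f}, "
      f"spread={max(estimates) - min(estimates):.4f}")
```

If the spread is large, the choice of weighting is doing most of the work, which is exactly where motivated reasoning can creep back in.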
Of course, we don’t need to use direct reference classes for AI risk or AI misalignment. We can break the problem down.
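For instance, one common style of decomposition multiplies conditional probabilities for a chain of steps (AGI is developed, it is seriously misaligned, it successfully takes over), and we could bring reference classes or other evidence to bear on each step separately. A minimal sketch, with placeholder numbers that are purely illustrative and not anyone’s actual estimates:

```python
# Toy decomposition of P(AI takeover) into conditional steps.
# All numbers are placeholders for illustration only.
p_agi_this_century = 0.5           # P(AGI is developed this century)
p_misaligned_given_agi = 0.2       # P(seriously misaligned | AGI)
p_takeover_given_misaligned = 0.1  # P(successful takeover | misaligned)

p_takeover = (p_agi_this_century
              * p_misaligned_given_agi
              * p_takeover_given_misaligned)
print(f"P(takeover) = {p_takeover:.3f}")  # 0.010 with these placeholders
```

The decomposition doesn’t remove judgment calls, but it at least localizes them to steps where reference classes or other evidence can be applied more directly.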