(Kneejerk response: If only we could engineer some kind of intelligence that could analyze the potentially long tail of x-risk, or could prudentially decide how to make trade-offs between that and other ways of reducing x-risk, or could prudentially reconsider all the considerations that went into focusing on x-risk in the first place instead of some other focus of moral significance, or...)
Yes, one of the nice features of FAI is that success there helps immensely with all other x-risks. However, it’s an open question whether creating FAI is possible before other x-risks become critical.
That is, the kneejerk response has the same template as saying, “if only we could engineer cold fusion, our other energy worries would be moot, so clearly we should devote most of the energy budget to cold fusion research”. Some such arguments carry through on expected utility, while others don’t; so I actually need to sit down and do my best reckoning.
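To make "some such arguments carry through on expected utility, while others don't" concrete, here is a minimal sketch of the kind of reckoning involved. All the numbers and names below are purely hypothetical placeholders, not estimates from this discussion: whether the long-shot project (FAI, or cold fusion in the analogy) dominates a more reliable incremental intervention hinges entirely on the probability you assign to the long shot succeeding in time.

```python
# Toy expected-value comparison; every number here is illustrative only.

def expected_value(p_success: float, payoff: float) -> float:
    """Expected payoff of an intervention, in arbitrary utility units."""
    return p_success * payoff

# A reliable, modest intervention vs. a long-shot, high-payoff project.
incremental = expected_value(p_success=0.9, payoff=1.0)            # 0.9
moonshot_optimistic = expected_value(p_success=0.05, payoff=100.0)  # 5.0
moonshot_pessimistic = expected_value(p_success=0.001, payoff=100.0)  # 0.1

# Under the optimistic probability the moonshot dominates;
# under the pessimistic one it doesn't. The argument's template alone
# doesn't settle which case you're in.
print(incremental, moonshot_optimistic, moonshot_pessimistic)
```

The point of the sketch is only that the template "success here would moot everything else" does not by itself carry the conclusion; the conclusion depends on the actual probabilities and timelines, which is why the reckoning has to be done rather than assumed.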