I think some organization should be seriously planning how to leverage possible uploading or intelligence-enhancement technologies for building FAI (e.g., trying to be the first to run an accelerated uploaded FAI research team, or developing better textbooks on FAI theory that improve the chances that future people, whether uploaded or with enhanced intelligence, get it right), and tracking new information that bears on such plans. Perhaps SIAI should form a team to work on this, or create another organization with that mission. Right now it doesn’t look like a top priority, but it could become one, so it deserves some thought; it will clearly be a priority in another 20 years, as the relevant technologies move closer, but by then it might be too late to do some specific important-in-retrospect thing.
What do you think of the idea that people currently interested in doing FAI research limit their attention to topics that are not “dual use” (i.e., equally applicable to building UFAI)? For example, metaethics and metaphilosophy seem “single use,” whereas logical uncertainty and, to a lesser extent, decision theory seem “dual use.” Of course we could work on dual-use topics and try to keep the results secret, but it seems unlikely that we’d be good enough at keeping secrets that such work wouldn’t leak out fairly quickly.