If IA (intelligence augmentation) comes before AGI, we will need FIA—the IA equivalent of Friendliness theory. When a human self-modifies using IA, how do we ensure value stability?
To create FIA, we may need a full understanding of human intelligence, which, beyond requiring data we don't yet have, may prove to be a hard problem in itself. Because IA works by modifying existing human brains, it could be developed before anyone fully understands human intelligence. There is also the problem of ensuring that everyone who uses IA actually applies the FIA theory.
FIA is analogous to FAI in these respects. If you think IA is likely to arrive before AGI, then uFIA may be comparably dangerous to uFAI (for instance, successful IA may jump-start AGI development by the intelligence-augmented humans).
Are there organizations, forums, etc. dedicated to building FIA, the way SIAI and similar groups are dedicated to building FAI?
ETA: the standard usage may be “Intelligence Amplification”, still abbreviated as IA. The meaning is the same.