“Imago FAI” is a serendipitous coinage. It sounds like what I had in mind here, when I talked about the mature form of a friendly “AI” being like a ubiquitous meme rather than a great brain in space. If a civilization has widely available knowledge and technology that is dangerous (because it can make WMDs or UFAIs), then any “intelligence” with access to that dangerous power needs to possess the traits we would call “friendly” if they were found in a developing AI. Or at least, empowered elements of the civilization must not have the potential or the tendency to start overturning its core values (values which need not be friendly by human standards for this to be a condition of the civilization’s stability and survival). This implies that access to technological power in such a civilization must come at the price of consenting to whatever form of mental monitoring and debugging is employed to enforce the analogue of friendliness.