Just someone wandering the internet. Someone smart, but not smart like all of you people. Someone silly and creative. Someone who wanders the internet to find cool, isolated corners like LessWrong.
The universe is so awesome and crazy and interesting, and I can’t wait for humanity to be advanced enough to understand all of it. While you people figure out solutions to the various messes our species is in (I would prefer for Homo sapiens to still exist in 20 years), I’ll be standing by for emotional support, because I’m nowhere near smart enough to be doing any of that actually important stuff. Remember to take care of your mental health while you’re saving the world.
Pronouns: he/him
Quick thought: If you have an aligned AI (FAI) in a multipolar scenario, other AIs might threaten to cause S-risk as blackmail, to force the FAI to do what they want. Therefore, we should make the FAI treat X-risk and S-risk as equally bad (even though S-risk is in reality terrifyingly worse). A blackmailer will pick the cheapest threat that still gives it maximum leverage, so if astronomical suffering buys no more leverage than oblivion, other powerful AIs will simply use oblivion as a threat instead (and oblivion is much easier to bring about).
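A minimal sketch of that incentive argument in code, in case it helps: all the costs and disutilities below are made-up illustrative numbers, and `cheapest_max_leverage_threat` is a hypothetical toy helper, not anything from an actual proposal.

```python
# Toy model: a blackmailer wants maximum leverage at minimum execution cost,
# so among the threats the target FAI dreads most, it picks the cheapest one.
# All numbers are made up purely for illustration.

def cheapest_max_leverage_threat(threats):
    """threats maps name -> (cost to carry out, disutility the FAI assigns)."""
    max_disutility = max(d for _, d in threats.values())
    # Only maximally dreaded threats give full leverage over the FAI.
    candidates = [n for n, (_, d) in threats.items() if d == max_disutility]
    return min(candidates, key=lambda n: threats[n][0])

# FAI with "honest" values: S-risk is far worse, so blackmailers escalate to it.
honest = {"X-risk": (1.0, 100.0), "S-risk": (10.0, 10_000.0)}
# FAI with equalized values: S-risk buys no extra leverage, so the cheaper
# threat (oblivion) wins.
equalized = {"X-risk": (1.0, 100.0), "S-risk": (10.0, 100.0)}

print(cheapest_max_leverage_threat(honest))     # -> S-risk
print(cheapest_max_leverage_threat(equalized))  # -> X-risk
```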
It is possible that an FAI could pull off some weird acausal decision-theory trick to make itself act as if it doesn’t care about anything done in efforts to blackmail it, so that threats carry no leverage at all. But the proposal above is just a backstop in case that fails.