Cryonics itself makes no moral prescriptions. You can think of it as a type of burial ritual.
But rituals are not performed in isolation; they are performed in the context of religions (or religion-like ideologies, if you prefer) that do make moral prescriptions.
Cryonics typically comes as part of the transhumanist/singularitarian ideological package, which does have moral content.
Brain upload? Imago FAI? Come on, it’s the same sort of stuff, just with supernatural miracles replaced by technological ones.
But the cryo people aren’t prescriptive about what imago FAI looks like; that’s the point. They’ll give you more life, but they won’t tell you how to live it. Religion, by contrast, doesn’t change your material circumstances but is very emphatic about how you should live with them.
“Imago FAI” is a serendipitous coinage. It sounds like what I had in mind here, when I talked about the mature form of a friendly “AI” being more like a ubiquitous meme than a great brain in space. If a civilization has widely available knowledge and technology that is dangerous (because it can be used to make WMDs or UFAIs), then any “intelligence” with access to that dangerous power needs to possess the traits we would call “friendly” if we found them in a developing AI. Or at least, the empowered elements of the civilization must not have the potential or the tendency to start overturning its core values (values which need not be friendly by human standards for this to be a condition of the civilization’s stability and survival). This implies that access to technological power in such a civilization must come at the price of consenting to whatever form of mental monitoring and debugging is used to enforce the analogue of friendliness.