I find myself conflicted about this. I want to preserve my human condition, and I want to give it up. It’s familiar, but it’s trying. I want the best of both worlds: the ability to challenge myself against real hardships and succeed, but also the ability to avoid the greatest hardships that I can’t overcome on my own. The paradox is that solving the actual hardships like aging and death will require sufficient power to make enjoyable hardships (solving puzzles, playing sports and other games, achieving orgasm, etc.) trivial.
I think that one viable approach is to essentially live vicariously through our offspring. I find it enjoyable watching children solve problems that are difficult for them but are now trivial for me, and I think that the desire to teach skills and to appreciate the success of (for lack of a better word) less advanced people learning how to solve the same problems that I’ve solved could provide a very long sequence of Fun in the universe. Pre-singularity humans already essentially do this. Grandparents still enjoy life despite having solved virtually all of the trivial problems (and facing imminent big problems), and I think I’d be fine being an eternal grandparent to new humans or other forms of life. I can’t extrapolate that beyond the singularity, but it makes sense that if we intend to preserve our current values we will need someone to be in the situation where those values still matter, and if we can’t experience those situations ourselves then the offspring we care about are a good substitute. The morality of creating children for this purpose may be an issue, though.
Another solution is a walled garden run by FAI that preserves the trivial problems humans like solving while solving the big problems itself. This has a stronger possibility for value drift, and I think people would value life a bit less if they knew it was ultimately a video game.
It’s also possible that upon reflection we’ll realize that our current values also let us care about hive minds in the same way we care about our friends and family now. We would be different, alien to our present selves, but with the ability to trace our values back to our present state and see that at no point did we sacrifice them for expediency or abandon them for their triviality. This seems like the least probable solution simply because our values are not special; they arose in our ancestral environment because they worked. That we enjoy them is an accident, and that they could fully encompass the post-singularity world seems a bit miraculous.
As a kid I always wondered about this in the context of religious heaven. What could a bunch of former humans possibly do for eternity that wouldn’t become terribly boring or involve complete loss of humanity? I could never answer that question, so perhaps it’s an {AI,god}-hard problem to coherently extrapolate human values.
What’s wrong with hive minds? As long as my ‘soul’ survives, I wouldn’t mind being part of some gigantic consciousness.
Also, another thought: it may take an AI to solve philosophy and the nature of the universe, but the answers may not be far beyond the capacity of the human brain to understand.
A hive mind can quickly lose a lot of old human values if the minds continue past the death of individual bodies. Additionally, values like privacy and self-reliance would be difficult to maintain. Also, things we take for granted, like being able to surprise friends with gifts or have interesting discussions while getting to know another person, would probably disappear. A hive mind might be great if it was formed from all your best friends, but joining a hive mind with all of humanity? Maybe after everyone is your best friend...
I appreciate the long response.