Yes, I very much agree with everything you wrote. (I agree so much I added you as a friend.)
Frankly, the big issue is our own mental health.
Absolutely! I tend to describe my concerns with our mental health as fear about ‘consistency’ in our values, but I prefer the associations of the former phrasing: ‘mental health’ suggests, for example, that our brains are playing a more active role in shifting and contorting our values.
This drains from us (from me, at least) some of the willpower to inject these values into our future self-modifying descendants.
For me, since assimilating the belief that there is no objective value, I’ve lost interest in the far future. I suppose that before, I felt as though we might fare well or fare poorly when measured against the ultimate morality of the universe, but either way we would have a role to play as the good guys or the bad guys, and it would be interesting. I read you as being more concerned that we will do the wrong thing, subjecting a new race of people to our haphazard values. Did I read this correctly? At first I think, optimistically, that they would be smarter and so could certainly fix themselves. But then I remember that contradictory values can make you miserable no matter how smart you are. (I’m not predicting anything about what will happen with CEV or AI; my response just referred to some unspecified, non-optimal state where we are smarter but not necessarily equipped with saner values.)
What if “our wish if we knew more, thought faster, were more the people we wished we were” is to die?
Possibly. And continuing with the mental health picture, it’s possible that elements of our psyche covertly crave death as freedom from struggle. But it seems to me that an unfettered mind would just be apathetic. Like a network of muscles with the bones removed.