This is known as the orthogonality thesis: intelligence and rationality don’t dictate your values. I don’t have time right now to explain the whole thing, but it’s discussed extensively in the Sequences if you want to read more. I think it’s pretty widely accepted around here as well.
My “intuition pump” is to imagine a superintelligent gigantic spider. Not some alien with human values in a spider body, but an actual spider that was ‘magically’ enlarged and given an IQ of 500.
The Orthogonality Thesis tag is a good place to start.
I think it scales, and applies to any type of intelligence. More intelligent humans don’t seem to be particularly more altruistic (though they tend to be richer, and so less obvious about their motivations). There’s no reason (that I see) to think that even further intelligence would make humans care about less-intelligent groups any more (or less) than they do now.
The orthogonality thesis usually comes up in the context of AI, because that’s where it actually matters, but the underlying idea applies to any mind. Making something smarter does not give it morals.
And no, I bet the psychopaths would use their newfound powers to blend in and manipulate people better. Overt crime would drop, and subtler harm would go up. That’s what already happens in the real world across the existing intelligence gradient.
I’m not a sociopath, but I was a sociopath-lite before transitioning (minimal emotion, sadistic streak, almost no empathy). I once sat and listened to my girlfriend pour her heart out in extreme emotional pain and I just did not care. I wanted her to shut up and let me get back to my game. She was annoying.
Telling 2016!raven to reason her way into morals is like telling you to reason your way into seeing gamma rays. It’s just not gonna happen. Sure, you can approximate it, but that’s not the same.
A psychopath can restrain themselves if there’s a reason to (like the threat of jail), but making them smarter only reduces the need to hide. If you want them to do good, you need to fix their mind; in my case, that meant correcting my fucked-up hormone system. I have no idea where you’d even start with a real psychopath, but there’s no reason to think that mere intelligence would help.
One concept in my moral system rests on the question of how you would respond to permanent retaliation if you were to go rogue. Could you stop an endless attack on your well-being, provoked by your doing things that other people hate? In a world with many extremely intelligent beings this could be very difficult, and even in a world where you are the only rogue Super-Einstein it would at least be tiresome (or resource-inefficient), so a single superintelligent individual might well prefer a situation where they do not need to defend themselves indefinitely. This is somewhat similar to the outcome of Wait But Why’s concept of the cudgel (search the page for “cudgel”). Ultimately, this concept relies heavily on there being at least some possibility of inflicting a small but ineradicable pain on a Super-Einstein. So in my opinion it is not really applicable to a singularity event, but it could be useful for slower developments.
Pain can also be defined for non-biological beings. For me it is just a word for something undesirable hardwired into your being. And maybe there is something undesirable for everything in the universe. One rather metaphysical candidate could be something like inertia (the resistance of any physical object to any change in its velocity). So you could argue that if you understand the movement of an entity (more concretely, its goals), you could find a way to harm it (with another movement) in a way that would register as “pain” for that entity. This concept is still very anthropocentric, so I am not sure whether the change in movement could lead to, or already be understood as, a positive outcome for humanity. Or maybe it would not be registered at all.