Virge:
How sure are you that most human moral disagreements are attributable to
- lack of veridical information, or
- lack of ability/tools to work through that information, or
- defects?
You talk freely about psychopaths and non-psychopaths as though these were distinct categories of defective and non-defective humans. I know you know this is not so. The arguments about the psychological unity of humankind only extend so far. E.g., would you be prepared to tell a homosexual that, if they were fully informed, they would decide to take a pill to change their orientation?
I don’t claim to be sure at all. It does seem to me that most modern humans are very much creatures of their own time; they don’t consider that future moral progress could be as radical as the change between their own time and Archimedes’s; they think of themselves as the wise, the educated, the modern, not as the savage barbarian children the Future will very likely regard us as. It also seems to me that people fail to systematically distinguish between terminal and instrumental disputes, and that they demonize their enemies (which is correspondence bias). The basic ev-bio necessity behind the psychological unity of human brains is not widely understood.
And even more importantly, the portion of our values that we regard as transpersonal, the portion we would intervene to enforce against others, is not all of our values; it’s not going to include a taste for pepperoni pizza, or in my case, it’s not going to include a notion of heterosexuality or homosexuality.
If there are distinct categories of human transpersonal values, I would expect them to look like “male and female babies”, “male children”, “male adults”, “female children”, “female adults”, “neurological damage 1”, “neurological damage 2”, not “Muslims vs. Christians!”
Roko:
If you told me that my ability to care about other people was neurologically damaged, and you offered me a pill to fix it, I would take it.
- no, you wouldn’t. The only reason that you are now saying that you would take it is that you currently have the ability to care about other people.
I said “damaged” not “missing”. The notion is that I am my current self, but one day you inform me that, relative to other humans, my ability to care about others is damaged. Do I want a pill to fix the damage, even though it will change my values? Yes, because I value humanity and want to stay with humanity; I don’t want to be off in some lonely unoccupied volume of mindspace. This is one of the arguments that moves me.
(Albeit if the damage were in any way entwined with my ability to do Singularity work, I would delay taking the pill.)
Let me spell it out. Every human mind comes with an evolved set of “yuck” factors (and their opposite, which I might call “yum” factors?). This is the “psychological unity of humankind”. Unfortunately, these cover only those situations which we were likely to run into in our EEA. Abortion probably did not exist in our EEA, so people have to compare it to something that did. There are two ways to do this—either you think of it as being just like helping a fellow member of your tribe, and become pro-abortion, or you think of it as being infanticide and become anti-abortion. Beyond these “yuck” factors, there is no further unity to the moral views of humankind.
The question is how much of this disagreement would persist if the disputants had full veridical knowledge of everything that goes on inside the developing fetus and full veridical knowledge of a naturalistic universe. Given the same “yum” and “yuck” factors, why would they be differently targeted? If there are no interpersonally compelling reasons to target them one way or the other, how would a group of fully informed minds come to believe that this was a proper issue of transpersonal morality?
By the way, quite a number of hunter-gatherer tribes practice infanticide as a form of birth control.
Zubon:
If we are talking about a difference valued at 0.0000001% of a human life, and you extrapolate it over a billion lives, we are talking about life and death matters. Successful AI will affect more than a billion lives.
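For concreteness (assuming the quoted “billion lives” means $10^9$), the arithmetic behind Zubon’s figure works out roughly as:

$$0.0000001\% \times 10^9 \ \text{lives} = 10^{-9} \times 10^9 \ \text{lives} = 1 \ \text{human life}$$

That is, a per-person difference that looks negligible sums to about one whole human life at that scale.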
If I have a value judgment that would not be interpersonally compelling to a supermajority of humankind even if they were fully informed, then it is proper for me to personally fight for and advocate that value judgment, but not proper for me to preemptively build an AI that enforces that value judgment upon the rest of humanity. The notion of CEV reflects this statement of morality I have just made.