The lives of most evildoers are, of course, largely incredibly prosaic, and I find it hard to believe that their values in their most prosaic doings are that dissimilar from those of everyone else around the world doing prosaic things.
I wasn’t thinking of evildoers. I was thinking of people who are just different, and have their own culture, traditions and way of life.
I think that thinking in terms of good and evil betrays a closet-realist approach to the problem. In reality, there are different people, with different cultures and biologically determined drives. These cultural and biological factors determine (approximately) a set of traditions, worldviews, ethical principles and moral rules, which can undergo a process of reflective equilibrium to determine a set of consistent preferences over the physical world.
We don’t know how the reflective equilibrium thing will go, but we know that it could depend upon the set of traditions, ethical principles and moral rules that go into it.
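To make that dependence concrete, here is a deliberately crude toy sketch (nothing like real reflective equilibrium or CEV; every rule name and conflict pair is invented for illustration). It just keeps discarding whichever rule is involved in the most conflicts until what remains is consistent, so the consistent end state depends on the set you start from.

```python
# Crude toy only: "rules" are bare labels, "conflicts" are hand-picked pairs,
# and "equilibrium" means greedily dropping the most-conflicted rule until
# none conflict. The sole point is that the end state depends on the input.

CONFLICTS = {
    frozenset({"obey scripture", "maximize welfare"}),
    frozenset({"obey scripture", "personal autonomy"}),
    frozenset({"tradition first", "personal autonomy"}),
}

def toy_equilibrium(rules):
    rules = set(rules)
    while True:
        # How many still-active conflicts does each rule participate in?
        counts = {r: sum(r in pair and pair <= rules for pair in CONFLICTS)
                  for r in rules}
        worst = max(counts, key=counts.get)
        if counts[worst] == 0:
            return rules              # no conflicts left: a consistent set
        rules.remove(worst)           # revise away the most-conflicted rule

villager = {"obey scripture", "tradition first", "personal autonomy"}
lw_reader = {"maximize welfare", "personal autonomy"}

print(sorted(toy_equilibrium(villager)))   # ['obey scripture', 'tradition first']
print(sorted(toy_equilibrium(lw_reader)))  # ['maximize welfare', 'personal autonomy']
```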
If someone is an illiterate devout Pentecostal Christian who lives in a village in Angola, the eventual output of the preference formation process applied to them might be very different from what it would be if it were applied to the typical LW reader.
They’re not evil. They just might have a very different “should function” from mine.
I think part of the point of what you call “moral anti-realism” is that it frees up words like “evil” so that they can refer to people who have particular kinds of “should function”, since there’s nothing cosmic that the word could be busy referring to instead.
If I had to offer a demonology, I guess I might loosely divide evil minds into: 1) those capable of serious moral reflection but avoiding it, e.g. because they’re busy wallowing in negative other-directed emotion, 2) those engaging in serious moral reflection but making cognitive mistakes in doing so, 3) those whose moral reflection genuinely outputs behavior that strongly conflicts with (the extension of) one’s own values. I think 1 comes closest to what’s traditionally meant by “evil”, with 2 being more “misguided” and 3 being more “Lovecraftian”. As I understand it, CEV is problematic if most people are “Lovecraftian” but less so if they’re merely “evil” or “misguided”, and I think you may in general be too quick to assume Lovecraftianity. (ETA: one main reason why I think this is that I don’t see many people actually retaining values associated with wrong belief systems when they abandon those belief systems; do you know of many atheists who think atheists or even Christians should burn in hell?)
“One main reason why I think this is that I don’t see many people actually retaining values associated with wrong belief systems when they abandon those belief systems; do you know of many atheists who think atheists or even Christians should burn in hell?”
One main reason why you don’t see that happening is that the set of beliefs you consider “right beliefs” is politically influenced; i.e., human beliefs come in certain patterns whose components are not intrinsically connected, but are connected by the custom that people who hold one of the beliefs usually hold the others.
For example, I knew a woman (an agnostic) who favored animal rights, and some group sent her literature on this basis asking for her help with pro-abortion activities, presumably because this is a typical pattern: people who favor animal rights are more likely to be pro-abortion. But she responded, “Just because I’m against torturing animals doesn’t mean I’m in favor of killing babies,” which is evidently quite a logical response, but not one in accordance with the usual pattern.
In other words, your own values are partly determined by political patterns, and if they weren’t (which they wouldn’t be under CEV) you might well see people retaining values you dislike when they extrapolate.
Most people may or may not be “Lovecraftian”, but why take that risk?
There are gains from cooperating with as many others as possible. Maybe these and other factors outweigh the risk or maybe they don’t; the lower the probability and extent of Lovecraftianity, the more likely it is that they do.
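A back-of-the-envelope way to see that last sentence, with a formalization and numbers that are entirely invented here (nothing from the thread): treat including everyone as buying a fixed cooperation gain at the cost of an expected loss if widespread values really are Lovecraftian.

```python
# Toy expected-value comparison; the functional form and all numbers are
# made up. Inclusion is worthwhile when the cooperation gain exceeds the
# expected Lovecraftian loss, which gets easier as probability or extent falls.

def value_of_including_everyone(coop_gain, p_lovecraftian, loss_if_lovecraftian):
    return coop_gain - p_lovecraftian * loss_if_lovecraftian

print(value_of_including_everyone(10, 0.05, 50))  #  7.5: gains dominate
print(value_of_including_everyone(10, 0.50, 50))  # -15.0: risk dominates
```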
Anyway, I’m not making any claims about what to do; I’m just saying people probably aren’t as Lovecraftian as Roko thinks, which I conclude both from introspection and from the statistics of what moral change we actually see in humans.
I agree that “probability and extent of Lovecraftianity” would be an important consideration if it were a matter of cooperation, and of deciding how many others to cooperate with, but Eliezer’s motivation in giving everyone equal weighting in CEV is altruism rather than cooperation. If it were cooperation, then the weights would be adjusted to account for contribution or bargaining power, instead of being equal.
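To spell out the contrast (with a toy formalization and made-up numbers, not anything from the CEV document): aggregate the parties’ utilities as a weighted sum, once with equal weights and once with weights tracking bargaining power, and the two schemes can rank outcomes differently.

```python
# Toy aggregation sketch; utilities and weights are invented for illustration.

def aggregate(utilities, weights):
    total = sum(weights)
    return sum(w / total * u for w, u in zip(weights, utilities))

# Two candidate outcomes, as valued by three parties:
outcome_a = [6, 6, -2]   # good for the two weaker parties, mildly bad for the strong one
outcome_b = [0, 0, 9]    # neutral for the weaker parties, great for the strong one

equal = [1, 1, 1]          # the altruistic reading: everyone counts equally
bargaining = [1, 1, 10]    # the cooperative reading: weight tracks leverage

print(aggregate(outcome_a, equal), aggregate(outcome_b, equal))            # ~3.33 vs 3.0
print(aggregate(outcome_a, bargaining), aggregate(outcome_b, bargaining))  # ~-0.67 vs 7.5
```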
To reiterate, “how Lovecraftian” isn’t really the issue. Just by positing the possibility that most humans might turn out to be Lovecraftian, you’re operating in a meta-ethical framework at odds with Eliezer’s, and in which it doesn’t make sense to give everyone equal weight in CEV (or at least you’ll need a whole other set of arguments to justify that).
That aside, the statistics you mention might also be skewed by an anthropic selection effect.
Alternatively: They’re evil. They have a very different “should function” from mine.
Consider the distinction between whether the output of a preference-aggregation algorithm will be very different for the Angolan Christian, and whether it should be very different. Some preference-aggregation algorithms may just be confused into giving diverging results because of inconsequential distinctions, which would be bad news for everyone, even the “enlightened” westerners.
(To be precise, the relevant factual statement is about whether any two same-culture people get preferences visibly closer to each other than any two culturally distant people do. It’s like the relatively small genetic significance of skin color, where within-race variation is greater than between-race variation.)
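A sketch of how one might check that factual question, using hypothetical preference vectors made up for illustration (no real data): compare the typical distance between extrapolated preferences within a culture to the typical distance across cultures.

```python
# Toy check of within-culture vs between-culture spread; all vectors invented.
import itertools
import statistics

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical "extrapolated preference" vectors, grouped by culture.
cultures = {
    "A": [(0.9, 0.2, 0.4), (0.1, 0.8, 0.5), (0.5, 0.5, 0.1)],
    "B": [(0.8, 0.3, 0.6), (0.2, 0.7, 0.4), (0.6, 0.4, 0.2)],
}

within = [dist(a, b)
          for members in cultures.values()
          for a, b in itertools.combinations(members, 2)]
between = [dist(a, b)
           for a in cultures["A"]
           for b in cultures["B"]]

print(statistics.mean(within), statistics.mean(between))
# If the means are comparable, culture explains little of the variation.
```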
I think we agree about this actually—several people’s picture of someone with alien values was an Islamic fundamentalist, and they were the “evildoers” I have in mind...