Most people are fine with absolutely anything that doesn’t hurt them or their immediate family and friends and isn’t broadly condemned by their community, no matter how badly it hurts others outside their circle. In fact, the worse it is, the less likely people are to see it as a bad thing, because doing so would be more painful. Most denials of this are empty virtue signalling.
Corollary: If an AI were aligned to the values of the average person, it would leave a lot of extremely important issues up in the air, to say the least.
>Most denials of this are empty virtue signalling.
How would you tell which ones aren’t, from a god’s eye perspective?
Mainly if they’re willing to disagree with social consensus out of concern for the welfare of those outside the circle of consideration their community has constructed. Most people deny that their moral beliefs are formed basically just from what’s popular, even when those beliefs do happen to conform to what’s popular, and are ready with plenty of rationalizations to that effect. For example, they think they would reach the same conclusions they hold now in a more regressive society, such as 1800s America or Nazi Germany, because their moral beliefs were formed from a thoughtful and empathetic consideration of the state of the world and just happened to align with local consensus on everything. This is unlikely to be the case, and it is also what people in those more regressive societies generally believed about themselves.
It’s a fair question, as I can see that my statement can come across as a self-aggrandizing declaration of my own moral purity in comparison to others. It’s more that I wish more people would think critically about which ethical considerations enter their concern, rather than what usually happens, which is that society converges on some agreed-upon Schelling point roughly corresponding to “those with at least this much social power matter”.
Related observation: though people do care about ethical considerations broader than just themselves and their families, as dictated by the social mores they live under, even those considerations tend not to be consequentialist in nature: people are fine if something bad by the standards of consensus morality happens, as long as they didn’t personally do anything “wrong”. Only the interests of self, family, and close friends rise to the level of caring about actual results.
An assertion that most people are fine with things that are condoned by social consensus and don’t hurt them or their immediate family and friends is obviously different from what you said, though, because the “social consensus” is something designed by people, in many cases with the explicit goal of including circles wider than “them and their friends”.
To me it doesn’t seem to be? “Condoned by social consensus” == “isn’t broadly condemned by their community” in the original comment. And

>because the “social consensus” is something designed by people, in many cases with the explicit goal of including circles wider than “them and their friends”

doesn’t seem to work unless you believe a majority of people are both actively designing the “social consensus” and have this goal; a majority of just those people who design the consensus having this goal is not sufficient.
“Men care for what they, themselves, expect to suffer or gain; and so long as they do not expect it to redound upon themselves, their cruelty and carelessness is without limit.” - Quirinus Quirrell
This seems likely, but what is your evidence for it?
For one, the documentary Dominion seems to bear this out pretty well. This is certainly an “ideal” situation, where cruelty and carelessness will never redound upon the people carrying it out.
That’s a documentary about factory farming, yes? What people do to lower animals doesn’t necessarily reflect what they’ll do to their own species. Most people here want to exterminate mosquitoes to fight diseases like malaria. Most people here do not want to exterminate human beings.