> All the rest is an act of shared imagination. It’s a dream we weave around a status game.
> They’re part of the dream of reality in which they exist, a dream that feels no less obvious and true to them than ours does to us.
> Moral ‘truths’ are acts of imagination. They’re ideas we play games with.
IDK, I feel like you could say the same sentences truthfully about math, and if you “went with the overall vibe” of them, you might be confused and mistakenly think math was “arbitrary” or “meaningless”, or didn’t have a determinate tendency, etc. Like, okay, if I say “one element of moral progress is increasing universalizability”, and you say “that’s just the thing your status cohort assigns high status”, I’m like, well, sure, but that doesn’t mean it doesn’t also have other interesting properties, like being a tendency across many different peoples; like being correlated with the extent to which they’re reflecting, sharing information, and building understanding; like resulting in reductionist-materialist local outcomes that have more of the material, local things that people otherwise generally seem to like (e.g. not being punched, having food, etc.); etc. It could be that morality has tendencies, but not without hormesis and mutually assured destruction and similar things that might be removed by aligned AI.
“Morality” is totally unlike mathematics, where the rules can first be clearly defined and we then operate with that set of rules.
I believe “increasing universalizability” is a good example to prove the OP’s point. I don’t think it’s a common belief among “many different peoples” in any meaningful sense. I don’t even really understand what it entails. There may be a few nearly universal elements like “wanting food”, but destructive aspects are fundamental to our lives, so you can’t just remove them without fundamentally altering our nature as human beings. Like a lot of people, I don’t mind being punched a little as long as (me / my family / my group) wins and gains more resources. I really want to see the people I hate being harmed, and would sacrifice a lot for it; that’s a very fundamental aspect of being human.
Are you pursuing this to any great extent? If so, remind me to stay away from you and avoid investing in you.
Why are you personally attacking me for discussing the topic at hand? I’m discussing human nature and giving myself as a counter-example, but I clearly meant that it applies to everyone in different ways. I will avoid personal examples since some people have a hard time understanding them. I believe you are ironically proving my point by signaling against me based on my beliefs, which you dislike.
Attacking you? I said I don’t want to be around you and don’t want to invest in you. I said it with a touch of snark (“remind me”).
> I clearly meant that it applies to everyone in different ways
Not clear to me. I don’t think everyone “would sacrifice a lot” to “see the people [they] hate being harmed”. I wouldn’t. I think behaving that way is inadvisable for you and harmful to others, and will tend to make you a bad investment opportunity.
> “Morality” is totally unlike mathematics, where the rules can first be clearly defined and we then operate with that set of rules.
By that description, mathematics is fairly unlike mathematics.
> I don’t even really understand what it entails.
It entails that behavior people consider moral tends towards having the property that, if everyone behaved like that, things would be good. Rule of law, equality before the law, Rawlsian veil of ignorance, stare decisis, equality of opportunity, the golden rule, liberty, etc. Generally, norms that are symmetric across space, time, context, and person. (Not saying we actually have these things, or that “most people” explicitly think these things are good, just that people tend to update in favor of these things.)
> It entails that behavior people consider moral tends towards having the property that, if everyone behaved like that, things would be good
This is just circular. What is “good”?
> Rule of law, equality before the law, Rawlsian veil of ignorance, stare decisis, equality of opportunity, the golden rule, liberty, etc. Generally, norms that are symmetric across space, time, context, and person. (Not saying we actually have these things, or that “most people” explicitly think these things are good, just that people tend to update in favor of these things.)
Evidence that “most people” update in favor of these things? It seems like a view centered on very current Western morality, and you could probably get people to update in the opposite direction (and they have, many times in history).
> Evidence that “most people” update in favor of these things? It seems like a view centered on very current Western morality,
Yeah, I think you’re right that it’s biased towards the West. I think you can generate the obvious examples (e.g. legal systems developing; e.g. various revolutions in the name of liberty and equality and against tyranny), and I’m not interested enough right now to come up with a more comprehensive treatment of the evidence, and I’m not super confident. It could be interesting to see how this plays out in places where these tendencies seem least present. Is China such a place? (What do most people living in China really think of non-liberty, non-Rawlsianism, etc.?)
The above sentences, if taken (as you do) as claims about human moral psychology rather than normative ethics, are compatible with full-on moral realism. I.e., everyone’s moral attitudes are pushed around by status concerns; luckily, we ended up in a community that ties status to looking for the long-run implications of your beliefs and making sure they’re coherent, and so, without having fundamentally different motivations from any other human being, we were better able to be motivated by actual moral facts.
I know the OP is trying to say loudly and repeatedly that this isn’t the case because ‘everyone else thought that as well, don’t you know?’ with lots of vivid examples, but if that’s the only argument it seems like modesty epistemology—i.e. “most people who said the thing you said were wrong, and also said that they weren’t like all those other people who were wrong in the past for all these specific reasons, so you should believe you’re wrong too”.
I think a lot of this thread confuses moral psychology with normative ethics—most utilitarians know and understand that they aren’t solely motivated by moral concerns, and are also motivated by lots of other things. They know they don’t morally endorse those motivations in themselves, but don’t do anything about it, and don’t thereby change their moral views.
If Peter Singer goes and buys a coffee, it’s no argument at all to say “aha, by revealed preferences, you must not really think utilitarianism is true, or you’d have given the money away!” That doesn’t show that when he does donate money, he’s unmotivated by moral concerns.
Probably even this ‘pure’ motivation to act morally in cases where empathy isn’t much of an issue is itself made up of e.g. a desire not to be seen believing self-contradictory things, cognitive dissonance, basic empathy, and so on. But so what? If the emotional incentives work to motivate people to form more coherent moral views, it’s the reliability of the process of forming the views that matters, not the motivation. I’m sure you could tell a similar story about the motivations that drive mathematicians to check that their proofs are valid.