People often profess beliefs to label themselves as members of a group. So far as I can tell, the belief that some things are moral and other things are not is one of those beliefs. I don’t have any other explanation for why people talk so much about something that isn’t subject to experimentation.
Well, of course it’s subject to experimentation, or at least real-world testing: do other agents consider you sufficiently trustworthy to deal with? In the indefinitely-iterated prisoner’s dilemma we call society, are you worth the effort of even trying to deal with?
(This is not directly relevant to your topic, but it jumped out at me.)
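Here's a minimal sketch of what that real-world test could look like, an iterated prisoner's dilemma where agents refuse to deal with anyone whose track record is bad enough. The agent names, payoff values, the 80% trust threshold, and the grace period are all made up for illustration, not anything from the literature:

```python
import random

# Standard prisoner's dilemma payoffs: (my move, their move) -> my payoff.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0,
           ("D", "C"): 5, ("D", "D"): 1}

class Agent:
    def __init__(self, name, defect_rate):
        self.name = name
        self.defect_rate = defect_rate  # how often the agent breaks its professed rules
        self.score = 0

    def move(self):
        return "D" if random.random() < self.defect_rate else "C"

def trustworthy(record, min_coop=0.8, grace=5):
    """The 'experiment' other agents run on you: is your track record good
    enough to be worth the effort of dealing with at all?"""
    if len(record) < grace:
        return True  # too little evidence yet; give them a chance
    return record.count("C") / len(record) >= min_coop

def simulate(agents, rounds=2000):
    record = {a.name: [] for a in agents}
    for _ in range(rounds):
        a, b = random.sample(agents, 2)
        # Agents with a bad enough record simply stop getting dealt with.
        if not (trustworthy(record[a.name]) and trustworthy(record[b.name])):
            continue
        ma, mb = a.move(), b.move()
        a.score += PAYOFFS[(ma, mb)]
        b.score += PAYOFFS[(mb, ma)]
        record[a.name].append(ma)
        record[b.name].append(mb)
    return agents

random.seed(0)
population = [Agent(f"rule-keeper-{i}", 0.05) for i in range(4)]
population += [Agent("opportunist", 0.5), Agent("defector", 0.95)]
for agent in simulate(population):
    print(agent.name, agent.score)
```

In runs of this toy model, agents that keep their professed rules keep finding partners, while the defector soon stops being worth dealing with at all. That is the kind of testability I mean.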
I’m not sure I understand the hypothesis. Surely you are not suggesting that people signal their adherence to consequentialism rather than deontological versions of ethics as a way of convincing rational agents to trust them.
I think they signal deontological ethics (cached rules), whatever their internal moral engine actually uses. “I am predictable, you can trust me not to defect!” I suspect it ties into in-group identification as well.
I need to write up my presently half-baked notions as a discussion post, probably after rereading the metaethics and ethical injunctions sequences in case it’s already covered.
If you accept that morality is a statement about strategies in the multiplayer game, then I agree. However, if you take the usual stand that “action X is moral” is a statement that is either true or false, no matter what anyone else thinks, then whether your moral system leads other people to trust you is irrelevant.
Here’s an environment where it makes a practical difference. At one point my dad tried (and failed) to get me to have “racial consciousness”. I didn’t pay much attention, but I gather he meant that I should not be color-blind in my social interactions, so that racist white people would trust me. He’s not stupid, so I assume there really were enough whites with that flavor of racism, somewhere, to form an in-group of meaningful size. Thus, if you accept that morality is about getting a specific in-group to trust you, racism is moral for the purposes of signaling membership in that specific in-group.
That conclusion just seems too repugnant to me. I’d rather stop using the word “moral” than use it with that definition. I won’t argue definitions with people, but I do want to point out that that definition leads to some odd-looking patterns of words in true statements.
Er, your second paragraph appears to say “morality is part of signaling therefore signaling is part of morality therefore the repugnance of a given use of signaling disproves your thesis.” (Please correct me if I’ve misparsed it.) I’m not sure any of those “therefores” work, particularly the first one (which is a simple “A in B therefore B in A” fallacy).
I’ve probably just failed to explain what I’m saying particularly well. I’ve been trying to sharpen the idea in discussions elsewhere and I’m discovering how LessWronged I’ve become, because I’ve had to go down two levels and I’m now explaining the very concept of cognitive biases and how provably gibberingly delusional humans are about themselves. I just had to supply a reference to show that people tend to pick up their beliefs from the people they associate with, and that priming exists … I can see why EY wrote the sequences.
Er, your second paragraph appears to say “morality is part of signaling therefore signaling is part of morality therefore the repugnance of a given use of signaling disproves your thesis.”
If the phrase “your thesis” refers to your claim:
Well, of course [statements about morality are] subject to experimentation, or at least real-world testing: do other agents consider you sufficiently trustworthy to deal with? In the indefinitely-iterated prisoner’s dilemma we call society, are you worth the effort of even trying to deal with?
then we’re failing to communicate. For me to agree or disagree with your statement requires me to guess what you mean by “morality”, and for my best guess, I agree with your statement. I said what my best guess was, and pointed out that it leads to some odd-looking true statements, such as “racism is moral” with all the qualifications I mentioned.
Of course, if “your thesis” meant something else, then we’re failing to communicate in a less interesting way because I have no clue WTF you mean.
(Please correct me if I’ve misparsed it.)
I think you have. I was guessing that you’re saying morality is a useful concept and you’re defining it to be a type of signaling; if you meant something else, please clarify. That’s a fine definition, and we can use it if you want, but it leads to the odd conclusion that racism is moral if you’re trying to signal to a group of racists. If you accept that conclusion, that’s great: we have a definition of morality that has odd consequences, but it has the advantage of being empirically testable.
If you don’t want to grant the assertion “racism is moral” with the above-mentioned qualifications, we need a different definition of morality. Ideally that different definition would still let us empirically test whether “X is moral”. I don’t know what that definition would be.
No, you’ve just explicitly clarified that you are in fact making an “A is a subset of B, therefore B is a subset of A” fallacy, with A=morality and B=signaling. Moralities being a subset of signaling (and I’m not saying it’s a strict subset anyway, but a combination of practical game theory and signaling; I’d be unsurprised, of course, to find there was more) does not, in logic, imply that all signaling (e.g. racism, to use your example) therefore falls within morality. That’s a simple logical fallacy, though the Latin name for it doesn’t spring to mind. It’s only not a fallacy if the two are identical or being asserted to be identical (or, for practical discussion, substantially identical), and I’m certainly not asserting that—there is plenty of signaling that has nothing to do with moralities.
Remember: if you find yourself making an assertion that someone else’s statement that A is a subset of B therefore implies that B is a subset of A, you’re doing it wrong, unless A is pretty much all of B (such that if you know something is in B, it’s very likely to be in A). If you still think that in the case you’re considering A⊂B ⇒ B⊂A, you should do the numbers.
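To make “do the numbers” concrete, here is a toy calculation; the counts are invented purely to illustrate the base-rate point and are not measurements of anything:

```python
# Suppose, for illustration only, that we could count acts of signaling (B)
# and the subset of them that signal moral rules (A).
signaling_acts = 10_000   # all of B
moral_signals = 500       # A, a subset of B

# A being a subset of B is compatible with almost none of B being in A:
print(moral_signals / signaling_acts)   # 0.05, i.e. 5%
# So "this act is signaling" tells you almost nothing about whether it is
# about morality, unless A covers nearly all of B.
```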
I proposed abandoning the word “morality” because it’s too muddled. You want to use it. I have repeatedly tried to guess what you mean by it, and you’ve claimed I’m wrong every time. Please define what you mean by “morality” if you wish to continue.