So we agree, at a minimum, that moral rules aren’t just ‘social rules.’ They may be a special kind of social rule. To figure that out, first explain to me: What makes a rule ‘social’? Is any rule made up by anyone at all, that pertains to interactions between people, a ‘social rule’? Or is a social rule a rule that’s employed by a whole social group? Or is it a rule that’s accepted as legitimate and binding upon a social group, by some relevant authority or consensus?
One of these characteristics is that people take them super seriously, even to the point of believing that they exist outside their heads, and don’t believe that they’re “just” social rules.
Most people don’t think that even frivolous, non-super-serious rules live inside their skulls. Baseball players don’t think baseball is magic, but they also don’t think the rules of baseball are neuronal states. (Whose skulls would the rules get to reside in? Is there a single ruleset spread across lots of brains, or does each brain have its own unique set of baseball rules?)
As for altruism, I share your preferences. So we can isolate the meta-ethical question from the normative one.
So we agree, at a minimum, that moral rules aren’t just ‘social rules.’ They may be a special kind of social rule. To figure that out, first explain to me: What makes a rule ‘social’? Is any rule made up by anyone at all, that pertains to interactions between people, a ‘social rule’? Or is a social rule a rule that’s employed by a whole social group? Or is it a rule that’s accepted as legitimate and binding upon a social group, by some relevant authority or consensus?
This seems like a definitional question; maybe we could skip that stage. What does it matter what counts as a moral rule? My guess: moral rules are “more important” than non-moral rules. What does “more important” mean in this context? Maybe the typical punishment or ostracism for breaking them is harsher, or maybe they just feel more important to your brain.
Picture two people arguing over whether gays “should” be allowed to marry. Both are perfectly aware of the statistics on preferences for and against gay marriage, and of all other relevant information. Their model of the world is the same, so what are they arguing about?
Now suppose two adults are collaborating on a fictional universe, and one believes one thing about it while the other believes another. Can you imagine them having a serious debate about what the fictional universe is “actually” like? I think it’s much more likely they would argue over what things should be like in order to make an interesting/cool universe than have an object-level argument over universe properties.
The rules of marriage are fictional in the same way a fictional universe is. In some cases, people advance very serious arguments about the “truth” of things that are fictional. This is very common for social rules and morality. I label these “attitude claims” in my post.
Suppose you’re living in WW2-era Germany, and you learn of a law against helping gypsies. You see a gypsy in need, and come to the conclusion that you’re morally obliged to help that gypsy; but you shirk your felt obligation, and decide to stay out of trouble, even though it doesn’t ‘feel right.’ You consider the obligation to help gypsies a moral rule, and don’t consider the law against helping gypsies a moral rule. Moreover, you don’t think it would be a moral rule even if you agreed with or endorsed it; you’d just be morally depraved as a result.
Is there anything counter-intuitive about the situation I’ve described? If not, then it seriously problematizes the idea that morality is just ‘social + important,’ or ‘social + praised if good, punished if bad.’ The law is more important to me, or I’d not have prioritized it over my apparent moral obligation. And it’s certainly more important to the Powers That Be. And the relation of praise/punishment to good/bad seems to be reversed here. Your heuristic gets the wrong results, if it’s meant to in any way resemble our ordinary concept of morality.
Their model of the world is the same, so what are they arguing about?
Is it wise to add this assumption in? It doesn’t seem required by the rest of your scenario, and it risks committing you to absurdity; surely if their models were 100% identical, they’d have totally identical beliefs and preferences and life-experiences, hence couldn’t disagree about the rules. It will at least take some doing to make their models identical.
Can you imagine them having a serious debate about what the fictional universe is “actually” like?
Yes, very easily. Fans of works of fiction do this all the time. (They also don’t generally conceptualize orcs and elves as brain processes inside their skulls, incidentally.)
I think it’s much more likely they would argue over what things should be like in order to make an interesting/cool universe than have an object-level argument over universe properties.
Maybe, but you’re assuming that the act of creation always feels like creation. In many cases, it doesn’t. The word ‘inspiration’ attests to the feeling of something outside yourself supplying you with new ideas. Ancient mythologists probably felt this way about their creative act of inventing new stories about the gods; they weren’t all just bullshitting; some of them genuinely thought that the gods were communing with them via the process of invention. That’s an extreme case, but I think it’s on one end of a continuum of imaginative acts. Invention very frequently feels like discovery. (See, for instance, mathematics.)
I actually like your fictionalist model. I think it’s much more explanatory and general than trying to collapse a lot of disparate behaviors under ‘attitude claims’; and it has the advantage that claims about fiction clearly aren’t empirical in some sense, whereas claims about attitude seem no less empirical than claims about muons or accordions.
Your heuristic gets the wrong results, if it’s meant to in any way resemble our ordinary concept of morality.
Sure, I’ll accept that.
Is it wise to add this assumption in? It doesn’t seem required by the rest of your scenario, and it risks committing you to absurdity; surely if their models were 100% identical, they’d have totally identical beliefs and preferences and life-experiences, hence couldn’t disagree about the rules. It will at least take some doing to make their models identical.
Identical models don’t imply identical preferences or emotions. Our brains can differ a lot even if we predict the same stuff.
I actually like your fictionalist model.
Thanks.
claims about attitude seem no less empirical than claims about muons or accordions.
Hm, they sure do to me, but based on this thread, maybe not to most people. I guess the anti-virus-type approach was a bad one and people really wanted a crisp definition of “empirical claim” all along, eh? Or maybe it’s just a case of differing philosophical intuitions? It sounds like my fiction-based argument might have shifted your intuition some by pointing out that moral rules share a lot of important characteristics with things you felt clearly weren’t empirical. (Which seems like associative thinking. Maybe this is how most philosophical discourse works?)
What do you think of my post as purely practical advice about which statement endorsements to hack in order to better achieve your preferences, brushing aside the question of what exactly constitutes an “empirical claim” and whatnot? (If rationalists should win, maybe our philosophy should be optimized for winning?)
Identical models don’t imply identical preferences or emotions. Our brains can differ a lot even if we predict the same stuff.
Yes, but the two will have identical maps of their own preferences, if I’m understanding your scenario. They might not in fact have the same preferences, but they’ll believe that they do. Brains and minds are parts of the world.
Hm, they sure do to me
Based on what you’re going for, I suspect the right heuristic is not ‘does it convey information about an attitude?’, but rather one of these:
Is its connotation more important and relevant than its denotation?
Does it convey factual content purely by implicature rather than by explicit assertion?
Does it have reasonably well-defined truth-conditions?
Is it saturated, i.e., has its meaning been fully specified or considered, with no ‘gaps’?
If I say “I’m very angry with you,” that’s an empirical claim, just as much as any claim about planetary orbits or cichlid ecology. I can be mistaken about being angry; I can be mistaken about the cause of my anger; I can be mistaken about the nature of anger itself. And although I’m presumably trying to change someone’s behavior if I’ve told him I’m angry with him, that’s not an adequate criterion for ‘empiricalness,’ since we try to change people’s behavior with purely factual statements all the time.
I agree with your suggestion that in disagreements over matters of fact, relatively ‘impersonal’ claims are useful. Don’t restrict your language too much, though; rationalists win, and winning requires that you use rhetoric and honest emotional appeals. I think the idea that normative or attitudinal claims are bad is certainly unreasonable, at least as unreasonable as being squicked out by interrogatives, imperatives, or interjections because they aren’t truth-apt. Most human communication is not, and never has been, and never will be, truth-functional.