Your heuristic gets the wrong results if it’s meant to in any way resemble our ordinary concept of morality.
Sure, I’ll accept that.
Is it wise to add this assumption in? It doesn’t seem required by the rest of your scenario, and it risks committing you to absurdity; surely if their models were 100% identical, they’d have totally identical beliefs and preferences and life-experiences, hence couldn’t disagree about the rules. It will at least take some doing to make their models identical.
Identical models don’t imply identical preferences or emotions. Our brains can differ a lot even if we predict the same stuff.
I actually like your fictionalist model.
Thanks.
claims about attitude seem no less empirical than claims about muons or accordions.
Hm, they sure do to me, but based on this thread, maybe not to most people. I guess the anti-virus-type approach was a bad one, and people really wanted a crisp definition of “empirical claim” all along, eh? Or maybe it’s just a case of differing philosophical intuitions? It sounds like my fiction-based argument might have shifted your intuition somewhat by pointing out that moral rules share a lot of important characteristics with things you felt clearly weren’t empirical. (Which seems like associative thinking. Maybe this is how most philosophical discourse works?)
What do you think of my post as purely practical advice about which statement endorsements to hack in order to better achieve your preferences, brushing aside the question of what exactly constitutes an “empirical claim” and whatnot? (If rationalists should win, maybe our philosophy should be optimized for winning?)
Identical models don’t imply identical preferences or emotions. Our brains can differ a lot even if we predict the same stuff.
Yes, but the two will have identical maps of their own preferences, if I’m understanding your scenario. They might not in fact have the same preferences, but they’ll believe that they do. Brains and minds are parts of the world.
Hm, they sure do to me
Based on what you’re going for, I suspect the right heuristic is not ‘does it convey information about an attitude?’, but rather one of these:
Is its connotation more important and relevant than its denotation?
Does it convey factual content purely by implicature, rather than by explicit assertion?
Does it have reasonably well-defined truth-conditions?
Is it saturated, i.e., has its meaning been fully specified or considered, with no ‘gaps’?
If I say “I’m very angry with you,” that’s an empirical claim, just as much as any claim about planetary orbits or cichlid ecology. I can be mistaken about being angry; I can be mistaken about the cause for my anger; I can be mistaken about the nature of anger itself. And although I’m presumably trying to change someone’s behavior if I’ve told him I’m angry with him, that’s not an adequate criterion for ‘empiricalness,’ since we try to change people’s behavior with purely factual statements all the time.
I agree with your suggestion that in disagreements over matters of fact, relatively ‘impersonal’ claims are useful. Don’t restrict your language too much, though; rationalists win, and winning requires that you use rhetoric and honest emotional appeals. I think the idea that normative or attitudinal claims are bad is certainly unreasonable, at least as unreasonable as being squicked out by interrogatives, imperatives, or interjections because they aren’t truth-apt. Most human communication is not, and never has been, and never will be, truth-functional.