First, EY makes it abundantly clear that two agents can have a fundamental disagreement on values; it’s just not the best (or most helpful) assumption when you’re talking about two sane human beings with a vast sea of common frameworks and heuristics.
Secondly, I’m worried about what you’re trying to do with words when you suggest we “take them to be unreasonable to intrinsically desire the eating of babies”.
If you’re making an empirical claim that an alien with fundamentally different terminal values will (say) be uninterested in negotiating mutually beneficial deals, or will make patently suboptimal decisions by its own criteria, or exhibit some other characteristic of what we mean by “unreasonable”, then you’d need some strong evidence for that claim.
If instead you openly redefine “reasonable” to include “shares our fundamental moral standards”, then the property
it is essential to the concept of morality that it involves shared standards common to all fully reasonable agents
becomes a tautology which no longer excludes “meta-semantic subjectivism”, as you put it. So I’m puzzled what you mean.
Talking past each other a bit here. Let me try again.
EY makes it abundantly clear that two agents can have a fundamental disagreement on values
EY allows for disagreement in attitude: you might want one thing, while the babyeaters want something different. Of course I’m not charging him with being unable to accommodate this. The objection is instead that he’s unable to accommodate disagreement in moral judgment (at the fundamental level). Normativity as mere semantics, and all that.
Your second point rests on a false dichotomy. I’m not making an empirical claim, but nor am I merely defining the word “reasonable”. Rather, I’m making a substantive normative (non-empirical) hypothesis about which things are reasonable. If you can’t make sense of the idea of a substantive non-empirical issue, you may have fallen victim to scientism.