Would it be fair to extrapolate this and say that individual variation in value sets provides a good explanation of the pattern of agreement and disagreement we see between individuals over moral values, and possibly in quite different domains as well (politics, aesthetics, gardening)?
Yes.
What I was getting at is that this looks like complete moral relativism: ‘right for me’ is the only right there is (since you seem to be implying there is nothing interesting to be said about the process of negotiation that occurs when people’s values differ). I take it you’re willing to bite this bullet.
I think that’s generally the job of normative ethics; metaethics is a little more open-ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not to give a pluralistic reduction.
I take your point here. I may be conflating ethical and meta-ethical theory. I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?). If we’re talking about a defence of, say, non-cognitivism, though, maybe what you say is fair.
No, I wouldn’t say that. It would be a little odd to say that anyone who doesn’t hold the belief that 68 + 57 equals 125 is neglecting some cosmic duty.
This is fair.
Instead, I would affirm:
In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
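(To make “mathematically correct” concrete: the identity in question, and any equivalent expression of it, is the sort of thing a proof assistant verifies by computation alone. A minimal Lean 4 sketch, offered purely as illustration:)

```lean
-- The identity under discussion is a theorem of arithmetic;
-- Lean's kernel checks it by computation, no axioms of duty required.
example : 68 + 57 = 125 := rfl

-- One "equivalent expression" in the sense above:
example : 57 + 68 = 125 := rfl
```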
This is an interesting proposal, but I’m not sure what it implies. Is it possible for a rational person to strive to believe anything but the truth? Whether in math or anything else, doesn’t a rational person always try to believe what is correct? Or, to put the point another way, isn’t having truth as its goal part of the concept of belief? If so, I suggest this collapses to something like
*When considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
or, more plausibly,
*When considering 68 + 57, we ought to believe it equals 125 or some equivalent expression.
But if this is fair, I’m back to wondering where the ‘ought’ comes from.