I feel like your austere meta-ethicist is mostly missing the point. It’s utterly routine for different people to have conflicting beliefs about whether a given act is moral*. And often they can have a useful discussion, at the end of which one or both participants change their beliefs. These conversations can happen without the participants changing their definitions of words like ‘moral’, and often without them having a clear definition at all.
[This is my first LW comment—if I do something wrong, please bear with me]
This suggests that precise definitions or agreement about definitions isn’t all that critical. But it’s sometimes useful to be able to reason from stipulated and mutually agreed definitions, in which case meta-ethical speculation and reasoning is doing useful work if it offers a menu of crisp, useful definitions that can be used in discussion of specific moral claims. Relatedly, it’s also doing useful work by offering a set of definitions that help people conceptualize and articulate their personal feelings about morality, even absent a concrete first-order question.
And part of what goes into picking definitions is to understand their consequences. A philosopher is doing useful work for me if he shows me that a tempting-sounding definition of ‘morality’ doesn’t pick out the set of things I want it to pick out, or that some other definition turns out not to refer to any clear set at all.
Many mathematical entities have multiple logically equivalent definitions that are of different utility in different contexts. (E.g., sometimes I want to think about a circle as a locus of points, and sometimes as the solution set to an equation.) In the real world, something similar happens.
When I discuss, say, abortion, with somebody, probably there are multiple working definitions of ‘moral’ that could be mutually agreed upon for the purpose of the conversation, and the underlying dispute would still be nontrivial and intelligible. But some definitions might be more directly applicable to the discussion—and philosophical reasoning might be helpful in figuring out what the consequences of various definitions are. For instance, a non-cognitivist definition strikes me intuitively as less likely to be useful—but I’d be open to an argument showing how it could be useful in a debate.
Probably a great deal of academic writing on meta-ethics is low value. But that’s true of most writing on most topics and doesn’t show that the topic is pointless. (With academics being major offenders, but not the only offenders.)
*I’m thinking of the individual personal changes in belief that went along with increased opposition to official racism in America over the course of the 20th century. Or opposition to slavery in the 19th.
Having had time to mull this over—I think there’s something about your post that bothers me. I don’t think it’s possible to pinpoint a single sentence, but here are two things that don’t quite satisfy me.
1) Neither your austere nor your empathetic meta-ethicist seems to be telling me anything I wanted to hear. What I want is a “linguistic meta-ethicist”, who will tell me what other competent speakers of English mean when they use “moral” and suchlike terms. I understand that different people mean different things, and I’m fine with an answer which comes in several parts, and with notes about which speakers are primarily using which of those possible definitions.
What I don’t want is a brain scan from each person I talk to—I want an explanation that’s short and accessible enough to be useful in conversations. Conventional ethics and meta-ethics have given a bunch of useful definitions. Saying “well, it depends” seems unnecessarily cautious; saying “let’s decode your brain” seems excessive for practical purposes.
2) Most of the conversations I’m in that involve terms like “moral” would be only slightly advanced by having explicit definitions—and often the straightforward terms to use instead of “moral” are very nearly as contentious or nebulous. In your own examples, you have your participants talk about “well-being” and “non-moral goodness.” I don’t think that’s a significant step forward. That’s just hiding morality inside the notion of “a good life”—which is a sensible thing to say, but people have been saying it since Plato, and it’s an approach that has problems of its own.
By the way, I do understand that I may not have been your target audience, and that the whole series of posts has been carefully phrased and well organized, and I appreciate that.
I would think that the Hypothetical Imperatives are useful there. You can thus break down your own opinions into material of the form:
“If the set X of imperative premises holds, and the set Y of factual premises holds, then logic Z dictates that further actions W are imperative.
“I hold X already, and I am convinced of the factual truth of Y, thus I believe W to be imperative.”
Even all those complete bastards who disagree with your X can thus come to an agreement with you about the hypothetical as a whole, provided they are epistemically rational. Having isolated the area of disagreement to X, Y, or Z, you can then proceed to argue about it.
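For what it’s worth, here is a minimal sketch (my own toy illustration, not anything the comment above proposes) of how the X/Y/Z/W decomposition can isolate where a disagreement actually lives. The premise sets and the policy name are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class HypotheticalImperative:
    imperative_premises: set   # X: the values/goals the speaker holds
    factual_premises: set      # Y: the factual claims the argument relies on
    conclusion: str            # W: the action claimed to be imperative

def locate_disagreement(imp: HypotheticalImperative, my_values: set, my_facts: set) -> str:
    """Report which component of the hypothetical the two discussants actually dispute."""
    if not imp.imperative_premises <= my_values:
        return "disagreement over X (values)"
    if not imp.factual_premises <= my_facts:
        return "disagreement over Y (facts)"
    # If X and Y are both shared, whatever dispute remains concerns Z,
    # the inference from the premises to W.
    return "any remaining disagreement is over Z (the inference)"

imp = HypotheticalImperative({"minimize suffering"},
                             {"policy P reduces suffering"},
                             "support policy P")
print(locate_disagreement(imp, my_values={"minimize suffering"}, my_facts=set()))
# -> disagreement over Y (facts)
```

Each side can accept the whole conditional and then argue only about the flagged component.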
Your linguistic metaethicist sounds like the standard philosopher doing conceptual analysis. Did you see my post on ‘Conceptual Analysis and Moral Theory’?
I think conversations using moral terms would be greatly advanced by first defining the terms of the debate, as Aristotle suggested. Also, the reason ‘well-being’ or ‘non-moral goodness’ are not unpacked is because I was giving brief examples. You’ll notice the austere metaethicist said things like “assuming we have the same reduction of well-being in mind...” I just don’t have the space to offer such reductions in what is already a long post.
I would find it helpful—and I think several of the other posters here would as well—to see one reduction on some nontrivial question carried far enough for us to see that the process can be made to work. If I understand right, your approach requires that speakers, or at least many speakers much of the time, can reduce from disputed, loaded moral terms to reasonably well-defined and fact-based terminology. That’s the point I’d most like to see you spend your space budget on in future posts.
Definitions are good. Precise definitions are usually better than loose definitions. But I suspect that in this context, loose definitions are basically good enough and that there isn’t a lot of value to be extracted by increased precision there. I would like evidence that improving our definitions is a fruitful place to spend effort.
I did read your post on conceptual analysis. I just re-read it. And I’m not convinced that the practice of conceptual analysis is any more broken than most of what people get paid to do in the humanities and social sciences. My sense is that the standard textbook definitions are basically fine, and that the ongoing work in the field is mostly just people trying to get tenure and show off their cleverness.
I don’t see that there’s anything terribly wrong with the practice of conceptual analysis—so long as we don’t mistake an approximate and tentative linguistic exercise for access to any sort of deep truth.
I don’t think many speakers actually have an explicit ought-reduction in mind when they make ought claims. Perhaps most speakers actually have little idea what they mean when they use ought terms. For these people, emotivism may roughly describe speech acts involving oughts.
Rather, I’m imagining a scenario where person A asks what they ought to do, and person B has to clarify the meaning of A’s question before B can give an answer. At this point, A is probably forced to clarify the meaning of their ought terms more thoroughly than they have previously done. But if they can’t do so, then they haven’t asked a meaningful question, and B can’t answer the question as given.
I would like evidence that improving our definitions is a fruitful place to spend effort.
Why? What I’ve been saying the whole time is that improving our definitions isn’t worth as much effort as philosophers are expending on it.
I’m not convinced that the practice of conceptual analysis is any more broken than most of what people get paid to do in the humanities and social sciences.
On this, we agree. That’s why conceptual analysis isn’t very valuable, along with “most of what people get paid to do in the humanities and social sciences.” (Well, depending on where you draw the boundary around the term ‘social sciences.’)
I don’t see that there’s anything terribly wrong with the practice of conceptual analysis...
Do you see something wrong with the way Barry and Albert were arguing about the meaning of ‘sound’ in Conceptual Analysis and Moral Theory? I’m especially thinking of the part about microphones and aliens.
I agree that emotivism is an accurate description, much of the time, for what people mean when they make value judgments. I would also agree that most people don’t have a specific or precise definition in mind. But emotivism isn’t the only description and for practical purposes it’s often not the most useful. Among other things, we have to specify which emotion we are talking about. Not all disgust is moral disgust.
Value judgments show up routinely in law and in daily life. It would be an enormous, difficult, and probably low-value task to rewrite our legal code to avoid terms like “good cause”, “unjust enrichment”, “unconscionable contract”, and the like. Given that we’re stuck with moral language, it’s a useful project to pull out some definitions to help focus discourse slightly. But we aren’t going to be able to eliminate them. “Morality” and its cousins are too expensive to taboo.
We want law and social standards to be somewhat loosely defined, to avoid unscrupulous actors trying to worm their way through loopholes. We don’t want to be overly precise and narrow in our definitions—we want to leverage the judgement of judges and juries. But conversely, we do want to give them guidance about what we mean by those words. And precedent supplies one sort of guidance, and some definitions give them an additional sort of guidance.
I suspect it would be quite hard to pick out precisely what we as a society mean when we use those terms in the legal code—and very hard to reduce them to any sort of concrete physical description that would still be human-intelligible. I would be interested to see a counterexample if you can supply one easily.
I have the sense that trying to talk about human judgement and society without moral language would be about like trying to discuss computer science purely in terms of the hardware—possible, but unnecessarily cumbersome.
One of the common pathologies of the academy is that somebody comes up with a bright idea or a powerful intellectual tool. Researchers then spend several years applying that tool to increasingly diverse contexts, often where the marginal return from the tool is near-zero. Just because conceptual analysis is being over-used doesn’t mean that it is always useless! The first few uses of it may indeed have been fairly high-value in aiding us in communicating. The fact that the tool is then overused isn’t a reason to ignore it.
Endless wrangles about definitions are, I think, necessarily low-value. Working out a few useful definitions or explanations for a common term can be valuable, though—particularly if we are going to apply those terms in a quasi-formal setting, like law.
It’s utterly routine for different people to have conflicting beliefs about whether a given act is moral*. And often they can have a useful discussion, at the end of which one or both participants change their beliefs. These conversations can happen without the participants changing their definitions of words like ‘moral’, and often without them having a clear definition at all.
It may be routine in the sense that it often happens, but not routine in the sense that this is a reliable approach to settling moral differences. Often such disputes are not settled despite extensive discussions and no obvious disagreement about other kinds of facts.
This can be explained if individuals are basing their judgments on differing sets of values that partially overlap. Even if both participants are naively assuming their own set of values is the set of moral values, the fact of overlap will sometimes mean that non-moral considerations which are significant to one’s values will also be significant for the other’s values. Other times, this won’t be the case.
For example, many pro-lifers naively assume that everyone places very high value on all human organisms, so they spend a lot of time arguing that an embryo or fetus is a distinct human organism. Anyone who is undecided or pro-choice who shares this value but wasn’t aware of the biological evidence that unborn humans are distinct organisms from their mothers may be swayed by such considerations.
On the other hand, many pro-choicers simply do not place equally high value on all human organisms, without counting other properties like sentience. Or — following Judith Jarvis Thomson in “A Defense of Abortion” — they may place equally high value on all human organisms, but place even greater value on the sort of bodily autonomy denied by laws against abortion.
Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.
I agree with all the claims you’re making about morality and about moral discussion. But I don’t quite see where any of this is giving me any new insights or tools. Sure, people have different but often overlapping values. I knew that. I think most adults who ever have conversations about morality know that. And we know that without worrying too much about the definition of morality and related words.
But I think everything you’ve said is also true about personal taste in non-moral questions. I and my friends have different but overlapping taste in music, because we have distinct but overlapping sets of desiderata for what we listen to. And sometimes, people get convinced to like something they previously didn’t. I want a meta-ethics that gives me some comparative advantage in dealing with moral problems, as compared to other sorts of disagreements. I had assumed that lukeprog was trying to say something specifically about morality, not just give a general and informal account of human motivation, values, and preferences.
Thus far, this sequence feels like a lot of buildup and groundwork that is true but mostly not in much dispute and mostly doesn’t seem to help me accomplish anything. Perhaps my previous comment should just have been a gentle nudge to lukeprog to get to the point.
I want a meta-ethics that gives me some comparative advantage in dealing with moral problems, as compared to other sorts of disagreements.
This may be a case where not getting it wrong is the main point, even if getting it right is a let down.
My own view is quite similar to Luke’s, and I find it useful when I hear a moral claim to try sorting out how much of the claim is value-expression and how much is about what needs to be done to promote values. Even if you don’t agree about values, it still helps to figure out what someone else’s fundamental values are and argue that what they’re advocating is out of line with their own values. People tend to be mistaken about how to fulfill their own values more than they are about how to fulfill their own taste in music.
People tend to be mistaken about how to fulfill their own values more than they are about how to fulfill their own taste in music.
Yes.
That is why I can interrogate what somebody means by ‘ought’ and then often show that by their own definition of ought, what they thought they ‘ought’ to do is not what they ‘ought’ to do.
It may be routine in the sense that it often happens, but not routine in the sense that this is a reliable approach to settling moral differences.
Do you know of anything better?
Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.
OTOH, the problem remains that people act on their values, and that one person’s actions can affect another person. Pluralistic morality is terrible at translating into a uniform set of rules that all are beholden to.
Pluralistic morality is terrible at translating into a uniform set of rules that all are beholden to.
Why is that the test of a metaethical theory rather than the theory which best explains moral discourse? Categorical imperatives — if that’s what you’re referring to — are one answer to the best explanation of moral discourse, but then we’re stuck showing how categorical imperatives can hold...or accepting error theory.
Perhaps ‘referring to categorical imperatives’ is not the only or even the best explanation of moral discourse. See “The Error in the Error Theory” by Stephen Finlay.
Why is that the test of a metaethical theory rather than the theory which best explains moral discourse?
Because there is a practical aspect to ethics. Moral discourse involves the idea that people should do the obligatory and refrain from the forbidden, irrespective of who they are. That needs explaining as well.
Uh-huh. Is that an issue of commission rather than omission? Are people not obligated to refrain from theft, murder, and rape, their inclinations notwithstanding?
If by ‘obligated’ you mean it’s demanded by those who fear being the targets of those actions, yes. Or if you mean exercising restraint may be practically necessary to comply with certain values those actions thwart, yes. Or if you mean doing those things is likely to result in legal penalties, that’s often the case.
But if you mean it’s some simple fact that we’re morally obligated to restrain ourselves from doing certain things, no. Or at least I don’t see how that could even possibly be the case, and I already have a theory that explains why people might mistakenly think such a thing is the case (they mistake their own values for facts woven into the universe, so hypothetical imperatives look like categorical imperatives to them).
The ‘commission’ vs. ‘omission’ thing is often a matter of wording. Rape can be viewed as omitting to get proper permission, particularly when we’re talking about drugging, etc.
But if you mean it’s some simple fact that we’re morally obligated to restrain ourselves from doing certain things, no. Or at least I don’t see how that could even possibly be the case, and I already have a theory that explains why people might mistakenly think such a thing is the case (they mistake their own values for facts woven into the universe, so hypothetical imperatives look like categorical imperatives to them).
Well, I have a theory about how it could be the case. Objective morality doesn’t have to be a fact-like thing that is paradoxically undetectable. It could be based on the other source of objectivity: logic and reason. It’s an analytical truth that you shouldn’t do to others what you wouldn’t want done to yourself. You are obliged to be moral so long as you can reason morally, in the sense that you will be held responsible.
It’s an analytical truth that you shouldn’t do to others what you wouldn’t want done to yourself.
I’m skeptical that this statement is true, let alone an analytic truth. Different people have different desires. I take the Golden Rule to be a valuable heuristic, but no more than that.
What is your reason for believing that it is true as an absolute rule?
Just to clarify where you stand on norms:
Would you say a person is obligated by facts woven into the universe to believe that 68 + 57 = 125 ? (ie, are we obligated in this sense to believe anything?)
To stick my own neck out: I am a realist about values. I think there are facts about what we ought to believe and do. I think you have to be, to capture mathematical facts. This step taken, there’s no further commitment required to get ethical facts. Obviously, though, there are epistemic issues associated with the latter which are not associated with the former.
Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.
Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values—and possibly in quite different domains as well (politics, aesthetics, gardening)?
You seem to be suggesting that meta-ethics aims merely to give a descriptively adequate characterisation of ethical discourse. If so, would you at least grant that many see its goal as (roughly) giving a general characterisation of moral rightness, which we all ought to strive for?
To stick my own neck out: I am a realist about values. I think there are facts about what we ought to believe and do. I think you have to be, to capture mathematical facts.
Facts as in true statements, or facts as in states-of-affairs?
Facts in the disappointingly deflationary sense that
It is a fact that P if and only if P (and that’s all there is to say about facthood).
This position is a little underwhelming to any who seek a metaphysically substantive account of what makes things true, but it is a realist stance all the same (no?). If you have strong arguments against this or for an alternative, I’m interested to hear.
Would you say a person is obligated by facts woven into the universe to believe that 68 + 57 = 125 ? (ie, are we obligated in this sense to believe anything?)
No, I wouldn’t say that. It would be a little odd to say anyone who doesn’t hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty. Instead, I would affirm:
In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
(I’m leaving ‘mathematically correct’ vague so different views on the nature of math are accommodated.)
In other words, the obligation relies on a goal. Or we could say normative answers require questions. Sometimes the implied question is so obvious, it seems strange to bother identifying it.
Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values—and possibly in quite different domains as well (politics, aesthetics, gardening)?
Yes.
You seem to be suggesting that meta-ethics aims merely to give a descriptively adequate characterisation of ethical discourse. If so, would you at least grant that many see its goal as (roughly) giving a general characterisation of moral rightness, which we all ought to strive for?
I think that’s generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.
Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values—and possibly in quite different domains as well (politics, aesthetics, gardening)?
Yes.
What I was getting at is that this looks like complete moral relativism: ‘right for me’ is the only right there is (since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people’s values differ). I’m understanding that you’re willing to bite this bullet.
I think that’s generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.
I take your point here. I may be conflating ethical and meta-ethical theory. I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?). If we’re talking about a defence of, say, non-cognitivism, though, maybe what you say is fair.
No, I wouldn’t say that. It would be a little odd to say anyone who doesn’t hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty.
This is fair.
Instead, I would affirm:
In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
This is an interesting proposal, but I’m not sure what it implies. Is it possible for a rational person to strive to believe anything but the truth? Whether in math or anything else, doesn’t a rational person always try to believe what is correct? Or, to put the point another way, isn’t having truth as its goal part of the concept of belief? If so, I suggest this collapses to something like
*When considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
or, more plausibly,
*When considering 68 + 57, we ought to believe it equals 125 or some equivalent expression.
But if this is fair I’m back to wondering where the ought comes from.
What I was getting at is that this looks like complete moral relativism: ‘right for me’ is the only right there is
While it is relativism, the focus is a bit different from ‘right for me.’ More like ‘this action measures up as right against standard Y’ where this Y is typically something I endorse.
For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving ‘the well-being of conscious creatures,’ then there’s a bit more going on than it just being right for you and me.
Or if I consider a practice morally right for the above reason, but you consider it morally wrong because it falls afoul of Rawls’ theory of justice, there’s more going on than it just being right for me and wrong for you. It’s more like I’m saying it’s right{Harris standard} and you’re saying it’s wrong{Rawls standard}. (...at least as far as cognitive content is concerned; we would usually also be expressing an expectation that others adhere to the standards we support.)
Of course the above are toy examples, since people’s values don’t tend to line up neatly with the simplifications of philosophers.
(since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people’s values differ).
It’s not apparent that values differ just because judgments differ, so there’s still a lot of interesting work to find out if disagreements can be explained by differing descriptive beliefs. But, yes, once a disagreement is known to result from a pure difference in values, there isn’t a rational way to resolve it. It’s like Luke’s ‘tree falling’ example; once we know two people are using different definitions of ‘sound,’ the best we can do is make people aware of the difference in their claims.
I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?).
Yep. While those are interesting standards to consider, it’s pretty clear to me that real world moral discourse is wider and more messy than any one normative theory. We can simply declare a normative theory as the moral standard — plenty of people have! — but the next person whose values are a better match for another normative theory is just going to disagree. On what basis do we find that one normative theory is correct when, descriptively, moral pluralism seems to characterize moral discourse?
Is it possible for a rational person to strive to believe anything but the truth?
If being rational consists in doing what it takes to fulfill one’s goals (I don’t know what the popular definition of ‘rationality’ is on this site), then it is still possible to be rational while holding a false belief, if a false belief helps fulfill one’s goals.
Now typically, false beliefs are unhelpful in this way, but I know at least Sinnott-Armstrong has talked about an ‘instrumentally justified’ belief that can go counter to having a true belief. The example I’ve used before is an atheist married to a theist, whose goal of having a happy marriage would in fact be better served if she could take a belief-altering pill so she would falsely take on her spouse’s belief in God.
Or, to put the point another way, isn’t having truth as its goal part of the concept of belief? [...] But if this is fair I’m back to wondering where the ought comes from.
Perhaps it comes from the way you view the concept of belief as implying a goal?
At risk of triggering the political mind-killer, I think there are some potentially problematic consequences of this view.
Once a disagreement is known to result from a pure difference in values, there isn’t a rational way to resolve it...the best we can do is make people aware of the difference in their claims.
Suppose we don’t have good grounds for keeping one set of moral beliefs over another. Now suppose somebody offers to reward us for changing our views, or punish us for not changing. Should we change our views?
To go from the philosophical to the concrete: There are people in the world who are fanatics, largely committed to some reading of the Bible/Koran/Little Green Book of Colonel Gaddafi/juche ideology of the Great Leader/whatever. Some of those people have armies and nuclear weapons. They can bring quite a lot of pressure to bear on other individuals to change their views to resemble those of the fanatic.
If rationalism can’t supply powerful reasons to maintain a non-fanatical worldview in the face of pressure to self-modify, that’s an objection to rationalism. Conversely, altering the moral beliefs of fanatics with access to nuclear weapons strikes me as an extremely important practical project. I suspect similar considerations will apply to powerful unfriendly AIs.
This reminds me of that line of Yeats, that “the best lack all conviction, while the worst are full of passionate intensity.” Ideological differences sometimes culminate in wars, and if you want to win those wars, you may need something better than “we have our morals and they have theirs.”
To sharpen the point slightly: There’s an asymmetry between the rationalists and the fanatics, which is that the rationalists are aware that they don’t have a rational justification for their terminal values, but the fanatic does have a [fanatical] justification. Worse, the fanatic has a justification to taboo thinking about the problem, and the rationalist doesn’t.
Just because morality is personal doesn’t make it not real. If you model people as agents with utility functions, the reason not to change is obvious—if you change, you won’t do all the things you value. Non-fanatics can do that the same as fanatics.
The difference comes when you factor in human irrationality. And sure, fanatics might resist where anyone sane would give in. “We will blow up this city unless you renounce the Leader,” something like that. But on the other hand, rational humans might resist techniques that play on human irrationality, where fanatics might even be more susceptible than average. Good cop / bad cop for example.
What about on a national scale, where, say, an evil mastermind threatens to nuke every nation that does not start worshiping the flying spaghetti monster? Well, what a rational society would do is compare benefits and downsides, and worship His Noodliness if it was worth it. Fanatics would get nuked. I fail to see how this is an argument for why we shouldn’t be rational.
if you want to win those wars, you may need something better than “we have our morals and they have theirs.”
And that’s why Strawmansylvania has never won a single battle, I agree. Just because morality is personal doesn’t make it unmoving.
Just because morality is personal doesn’t make it not real. If you model people as agents with utility functions, the reason not to change is obvious—if you change, you won’t do all the things you value.
Does this imply that if a rational actor has terminal values that are internally consistent and in principle satisfiable, it would always be irrational for the actor to change those values or allow them to change?
That doesn’t seem right either. Somehow, an individual improving their moral beliefs as they mature, the notional Vicar of Bray, and Pierre Laval are all substantially different cases of people changing their [terminal] beliefs in response to events. There’s something badly wrong with a theory that can’t distinguish those cases.
Also, my apologies if this has been already discussed to death on LW or elsewhere—I spent some time poking and didn’t see anything on this point.
Does this imply that if a rational actor has terminal values that are internally consistent and in principle satisfiable, it would always be irrational for the actor to change those values or allow them to change?
No, but it sets a high standard—if you value, say, the company of your family, then modifying to not want that (and therefore not spend much time with your family) costs as much as if you were kept away from your family by force for the rest of your life. So any threats have to be pretty damn serious, and maybe not even death would work if you know important secrets or do not highly value living without some key values.
an individual improving their moral beliefs as they mature, the notional Vicar of Bray, and Pierre Laval are all substantially different cases of people changing their [terminal] beliefs
I wouldn’t call all of those cases of modifying terminal values. From some quick googling (I didn’t know about the Vicar of Bray), what the Vicar of Bray cared about was being the vicar of Bray. What Pierre Laval cared about was being the head of the government and not being killed, maybe. So they’re maybe not good examples of changing terminal values, as opposed to instrumental ones.
Also “improving their moral beliefs as they mature” is a very odd concept once you think about it. How do you judge whether a moral belief is correct to hold without having a correct ultimate belief from the start to do the judging? It’s really an example of how humans are emphatically not rational agents—we follow a bunch of evolved and cultural rules, which can appear to produce consistent behavior, but really have all these holes and internal conflicts. And things can change suddenly, without the sort of rational deliberation described above.
Also “improving their moral beliefs as they mature” is a very odd concept once you think about it. How do you judge whether a moral belief is correct to hold without having a correct ultimate belief from the start to do the judging?
You could say the same about “improving our standards of scientific inference.” Circular? Perhaps, but it needn’t be a vicious circle. It’s pretty clear that we’ve accomplished it, so it must be possible.
I would cheerfully agree that humans aren’t rational and routinely change their minds about morality for non-rational reasons.
This is one of the things I was trying to get at. Ask when we should change our minds for non-rational reasons, and when we should attempt to change others’ minds using non-rational means.
The same examples I mentioned above work for these questions too.
Here’s what I had in mind with the reference to the Vicar of Bray. Imagine an individual with two terminal values: “Stay alive and employed” and the reigning orthodoxy at the moment. The individual sincerely believes in both, and whenever they start to conflict, changes their beliefs about the orthodoxy. He is quite sincere in advocating for the ruling ideology at each point in time; he really does believe in divine right of kings, just so long as it’s not a dangerous belief to hold.
The beliefs in question are at least potentially terminal moral beliefs. Without delving deep into the history, let’s stipulate for the purpose of the conversation that we’re talking about a rational actor who has a sequence of terminal moral beliefs about what constitutes a just government, and that these beliefs shift with the political climate.
Now for contrast, let’s consider a hypothetical rational but very selfish child. The child’s parents attempt and succeed in changing the child’s values to be less selfish. They do this by the usual parental tactics of punishment and example-setting, not by rational argument. By your social standard and mine, this is an improvement to the child.
Both the vicar and the child are updating their moral beliefs in response to outside pressure, not rational deliberation. The general consensus is that parents are obligated to bring up their children not to be overly self-centered and that reasoning with children is not a sufficient pedagogic technique, but conversely that coercive government pressure on religion is ignoble.
Is this simply that you and I think “a change in moral beliefs, brought about by non-reasonable means is good (all else equal), if it significantly improves the beliefs of the subject by my standards”?
I think the caveats will turn out to matter a lot. One of the things that human moral beliefs do, in practice, is give other humans some reasons to trust you. If I know that you are committed, for non-instrumental reasons, to avoid manipulating* me into changing my values, that gives me reasons to trust you. Conversely, if your moral view is that it’s legitimate to lie to people to make them do what you want, people will trust you less.
Obviously, people have incentives to lie about their true values. I think equally obviously, people are paying attention and looking hard for that sort of hypocrisy.
*This sentence is true for a range of possible expansions of “manipulating”.
My statement was more observational than ideal, though. Sure, a rational agent can be averse to manipulating other people (and humans often are too), because agents can care about whatever they want. But that doesn’t bear very strongly on how the language is used compared to the fact that in real-world usage I see people say things like “improved his morals” by only three standards: consistency, how much society approves, and how much you approve.
I think the worry here is that realizing ‘right’ and ‘wrong’ are relative to values might make us give up our values. Meanwhile, those who aren’t as reflective are able to hold more strongly onto their values.
But let’s look at your deep worry about fanatics with nukes. Does their disregard for life have to also be making some kind of abstract error for you to keep and act on your own strong regard for life?
I think the worry here is that realizing ‘right’ and ‘wrong’ are relative to values might make us give up our values. Meanwhile, those who aren’t as reflective are able to hold more strongly onto their values.
Almost. What I’m worried about is that acknowledging or defining values to be arbitrary makes us less able to hold onto them and less able to convince others to adopt values that are safer for us. I think it’s nearly tautological that right and wrong are defined in terms of values.
The comment about fanatics with nuclear weapons wasn’t to indicate that that’s a particular nightmare of mine. It isn’t. Rather, that was to get at the point that moral philosophy isn’t simply an armchair exercise conducted amongst would-be rationalists—sometimes having a good theory is a matter of life and death.
It’s very tempting, if you are firmly attached to your moral beliefs, and skeptical about your powers of rationality (as you should be!) to react to countervailing opinion by not listening. If you want to preserve the overall values of your society, and are skeptical of others’ powers of rational judgement, it’s tempting to have the heretic burnt at the stake, or the philosopher forced to drink the hemlock.
One of the undercurrents in the history of philosophy has been an effort to explain why a prudent society that doesn’t want to lose its moral footings can still allow dissent, including dissent about important values, that risks changing those values to something not obviously better. Philosophers, unsurprisingly, are drawn to philosophies that explain why they should be allowed to keep having their fun. And I think that’s a real and valuable goal that we shouldn’t lose sight of.
I’m willing to sacrifice a bunch of other theoretical properties to hang on to a moral philosophy that explains why we don’t need heresy trials and why nobody needs to bomb us for being infidels.
While it is relativism, the focus is a bit different from ‘right for me.’ More like ‘this action measures up as right against standard Y’ where this Y is typically something I endorse.
I don’t see much difference there. Relativist morality doesn’t have to be selfish (although the reverse is probably true).
For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving ‘the well-being of conscious creatures,’ then there’s a bit more going on than it just being right for you and me.
OK, but what I want to know is how you react to some person, whose belief system is internally consistent, who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him—there is no sense in which he ought not to have done what he did (assuming his belief system doesn’t inveigh against him offending yours)?
Or, to put the point another way, isn’t having truth as its goal part of the concept of belief? [...] But if this is fair I’m back to wondering where the ought comes from.
Perhaps it comes from the way you view the concept of belief as implying a goal?
Touché.
Look, what I’m getting at is this. I assume we can agree that
“68 + 57 = 125” is true if and only if 68 + 57 = 125
This being the case, if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, “Why ought I to believe that 68 + 57 = 125?”, and B answers, “Because it’s true”, then B is not really saying anything beyond, “Because it does”. B does not answer A’s question.
If the substantive answer is something along the lines that it is a mathematical fact, then I am interested to know how you conceive of mathematical facts, and whether there mightn’t be moral facts of generally the same ilk (or if not, why). But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside—you’ll surely allow this looks pretty dubious at least superficially.
OK, but what I want to know is how you react to some person, whose belief system is internally consistent, who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him
There’s an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.
Such a person would be objectively afoul of a standard against randomly killing people. But let’s say he acted according to a standard which doesn’t care about that; we wouldn’t be able to tell him he did something wrong by that other standard. Nor could we tell him he did something wrong according to the one, correct standard (since there isn’t one).
But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.
if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, “Why ought I to believe that 68 + 57 = 125?”, and B answers, “Because it’s true”, then B is not really saying anything beyond, “Because it does”. B does not answer A’s question.
Unless A was just asking to be walked through the calculation steps, then I agree B is not answering A’s question.
But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside—you’ll surely allow this looks pretty dubious at least superficially.
I’m not sure I’m following the argument here. I’m saying that all normativity is hypothetical. It sounds like you’re arguing there is a categorical ‘ought’ for believing mathematical truths because it would be very strange to say we only ‘ought’ to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical ‘oughts,’ there might be others.
Is it something like that?
If so, then I would offer the goal of “in order to be logically consistent.” There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
If so, then I would offer the goal of “in order to be logically consistent.” There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory...
You can stop right there. If no theory of morality based on logical consistency is offered, you don’t have to do any more.
I observe that you didn’t offer a pointer to a theory of morality based on logical consistency.
For one thing, I don’t think logical consistency is quite the right criterion for reason-based objective morality. Pointing out that certain ideas are old and well documented, is offering a pointer, and is not trolling.
But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.
I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary. Sorry if I’m not getting it.
I’m not sure I’m following the argument here. I’m saying that all normativity is hypothetical. It sounds like you’re arguing there is a categorical ‘ought’ for believing mathematical truths because it would be very strange to say we only ‘ought’ to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical ‘oughts,’ there might be others.
Is it something like that?
This states the thought very clearly, thanks.
If so, then I would offer the goal of “in order to be logically consistent.”
I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It’s possible this doesn’t really engage your thoughts, though.
There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn’t that an important result?
It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.
When a dispute is over fundamental values, I don’t think we can give the other side compelling grounds to act according to our own values. Consider Eliezer’s paperclip maximizer. How could we possibly convince such a being that it’s doing something irrational, besides pointing out that its current actions are suboptimal for its goal in the long run?
Thanks for the link to the Carroll story. I plan on taking some time to think it over.
If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn’t that an important result?
It’s important to us, but — as far as I can tell — only because of our values. I don’t think it’s important ‘to the universe’ for someone to refrain from going on a killing spree.
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
…besides pointing out that its current actions are suboptimal for its goal in the long run?
That sounds like a good rational argument to me. Is the paperclip maximiser supposed to have a different rationality or just different values?
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
Like so much material on this site, that tacitly assumes values cannot be reasoned about.
I cannot provide [a murderer] compelling grounds as to why he ought not to have done what he did… [T]o punish him would be arbitrary.
If you don’t want murderers running around killing people, then it’s consistent with your values to set up a situation in which murderers can expect to be punished, and one way to do that is to actually punish murderers.
Yes, that’s arbitrary, in the same sense that every preference you have is arbitrary. If you are going to act upon your preferences without deceiving yourself, you have to feel comfortable with doing arbitrary things.
I think you missed the point quite badly there. The point is that there is no rationally compelling reason to act on any arbitrary value. You gave the example of punishing murderers, but if every value is equally arbitrary that is no more justifiable than punishing stamp collectors or the left-handed. Having accepted moral subjectivism, you are faced with a choice between acting irrationally or not acting. OTOH, you haven’t exactly given moral objectivism a run for its money.
But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.
I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.
I’m not sure I’m following the argument here. I’m saying that all normativity is hypothetical. It sounds like you’re arguing there is a categorical ‘ought’ for believing mathematical truths because it would be very strange to say we only ‘ought’ to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical ‘oughts,’ there might be others.
Is it something like that?
This states the thought very clearly, thanks.
If so, then I would offer the goal of “in order to be logically consistent.”
I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It’s possible this doesn’t really engage your thoughts, though. Some people I know think it’s just foolish.
There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
As is pointed out in the other thread from your post, plausibly our goal in the first instance is to show that it is rational not to kill people.
There’s an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.
I don’t think that works. If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don’t think you can have multiple contradictory objective truths.
But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him.
You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.
I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
It’ll work on people who already subscribe to rationality, whereas relativism won’t.
What’s contradictory about the same object being judged differently by different standards?
Here’s a standard: return the width of the object in meters.
Here’s another: return the number of wavelengths of blue light that make up the width of the object.
And another: return the number of electrons in the object.
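To make the no-contradiction point concrete, here is a small sketch (toy numbers, my own illustration; the electron-count standard is omitted because it would need the object's composition as an extra input). The same object gets different verdicts from different standards, and each verdict is objective relative to its standard.

```python
BLUE_WAVELENGTH_M = 475e-9  # approximate wavelength of blue light, in meters

def width_in_meters(width_m: float) -> float:
    # Standard 1: return the width of the object in meters.
    return width_m

def width_in_blue_wavelengths(width_m: float) -> float:
    # Standard 2: return the number of blue-light wavelengths spanning that width.
    return width_m / BLUE_WAVELENGTH_M

object_width_m = 2.0
print(width_in_meters(object_width_m))            # 2.0
print(width_in_blue_wavelengths(object_width_m))  # roughly 4.2 million
```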
You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.
What’s contradictory about the same object being judged differently by different standards?
Nothing. There’s nothing contradictory about multiple subjective truths or about multiple opinions, or about a single objective truth. But there is a contradiction in multiple objective truths about morality, as I said.
Here’s a standard: return the width of the object in meters. Here’s another: return the number of wavelengths of blue light that make up the width of the object. And another: return the number of electrons in the object.
There isn’t any contradiction in multiple objective truths about different things; but the original hypothesis was multiple objective truths about the same thing, i.e. the morality of an action. If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.
If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.
The focus doesn’t have to be on John and Mary; it can be on the morality we’re referencing via John and Mary. By analogy, we could talk about John’s hometown and Mary’s hometown, without being subjectivists about the cities we are referencing.
Hmm. Sounds like it would be helpful to taboo “objective” and “subjective”. Or perhaps this is my fault for not being entirely clear.
A standard can be put into the form of sentences in formal logic, such that any formal reasoner starting from the axioms of logic will agree about the “judgements” of the standard.
I should mention at this point that I use the word “morality” to indicate a particular standard—the morality-standard—that has the properties we normally associate with morality (“approving” of happiness, “disapproving” of murder, etc). This is the standard I would endorse (by, for example, acting to maximise “good” according to it) were I fully rational and reflectively consistent and non-akrasiac.
So the judgements of other standards are not moral judgements in the sense that they are not statements about the output of this standard. There would indeed be something inconsistent about asserting that other standards made statements about—ie. had the same output as—this standard.
Given that, and assuming your objections about “subjectivity” still exist, what do you mean by “subjective” such that the existence of other standards makes morality “subjective”, and this a problem?
It already seems that you must be resigned to your arguments failing to work on some minds: there is no god that will strike you down if you write a paperclip-maximising AIXI, for example.
A standard can be put into the form of sentences in formal logic, such that any formal reasoner starting from the axioms of logic will agree about the “judgements” of the standard.
Yep. Subjective statements about X can be phrased in objectivese. But that doesn’t
make them objective statements about X.
Given that, and assuming your objections about “subjectivity” still exist, what do you mean by “subjective” such that the existence of other standards makes morality “subjective”, and this a problem?
By other standards do you mean other people’s moral standards, or non-moral (eg aesthetic standards)?
It already seems that you must be resigned to your arguments failing to work on some minds:
Of course. But I think moral objectivism is better as an explanation, because
it explains moral praise and blame as something other than a mistake; and
I think moral objectivism is also better in practice because having some
successful persuasion going on is better than having none.
Yep. Subjective statements about X can be phrased in objectivese. But that doesn’t make them objective statements about X.
I don’t know what you mean, if anything, by “subjective” and “objective” here, and what they are for.
By other standards do you mean other people’s moral standards, or non-moral (eg aesthetic standards)?
Okay… I think I’ll have to be more concrete. I’m going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only “moral” standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.
A mind, for example an alien, may be constructed such that it always executes the action that maximises the utility of some other standard. This utility function may be taken to be the “values” of the alien.
Moral praise and blame is not a mistake; whether certain actions result in an increase or decrease in the value of the moral utility function is an analytic fact. It is further an analytic fact that praise and blame, correctly applied, increases the output of the moral utility function, and that if we failed to do that, we would therefore fail to do the most moral thing.
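To make the “standard = utility function” picture concrete, here is a minimal Python sketch; the world-state encoding and both functions are invented purely for illustration, not proposed as a serious model.

```python
# Toy "standards" in the sense used above: functions from a (very crude)
# world-state to a number. The state encoding and both functions are made up.

world = {"happy_people": 90, "murders": 2, "paperclips": 14}

def morality_standard(state):
    # "Approves" of happiness, "disapproves" of murder.
    return state["happy_people"] - 100 * state["murders"]

def paperclip_standard(state):
    # A different standard entirely: it only counts paperclips.
    return state["paperclips"]

# The two standards score the same world differently, but there is no
# contradiction: each claim is about the output of a different function.
print(morality_standard(world))   # -110
print(paperclip_standard(world))  # 14
```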
I don’t know what you mean, if anything, by “subjective” and “objective” here, and what they are for.
By “subjective” I meant that it is indexed to an individual, and properly
so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is
no further fact that can undermine the truth of that—whereas if
Mary thinks the world is flat, there may be some sense in which
it is flat-for-Mary, but that doesn’t count for anything, because the
shape of the world is not something about which Mary has the last word.
By other standards do you mean other people’s moral standards, or non-moral (eg aesthetic standards)?
Okay… I think I’ll have to be more concrete. I’m going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only “moral” standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.
And there is one such standard in the universe, not one per agent?
By “subjective” I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that—whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doesn’t count for anything, because the shape of the world is not something about which Mary has the last word.
If Mary thinks the world is flat, she is asserting that a predicate holds of the earth. It turns out it doesn’t, so she is wrong. In the case of thinking vanilla is nice, there is no sensible niceness predicate, so we assume she’s using shorthand for nice_mary, which does exist, so she is correct. She might, however, get confused and think that nice_mary being true meant nice_x holds for all x, and use nice to mean that. If so, she would be wrong.
Okay then. An agent who thinks the morality-standard says something other than it does, is wrong, since statements about the judgements of the morality-standard are tautologically true.
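As a rough illustration of the indexing point, here is a minimal sketch; the names and tastes are invented:

```python
# There is no sensible one-place predicate nice(flavour); what exists is a
# relation indexed to a person. Names and tastes here are invented.

tastes = {"Mary": {"vanilla"}, "John": {"chocolate"}}

def nice_for(person, flavour):
    return flavour in tastes[person]

print(nice_for("Mary", "vanilla"))   # True:  vanilla is nice-for-Mary
print(nice_for("John", "vanilla"))   # False: no conflict with the line above

# The mistake would be to infer from nice_for("Mary", "vanilla") that
# nice_for(x, "vanilla") holds for every x.
```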
And there is one such standard in the universe, not one per agent?
There is precisely one morality-standard.
Each (VNM-rational or potentially VNM-rational) agent contains a pointer to a standard—namely, the utility function the agent tries to maximise, or would try to maximise if they were rational. Most of these pointers within a light year of here will point to the morality-standard. A few of them will not. Outside of this volume there will be quite a lot of agents pointing to other standards.
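A toy rendering of the “pointer to a standard” idea, continuing the illustrative sketch above; the helper function and the commented usage are hypothetical, not anyone’s canonical formulation.

```python
# "Each agent contains a pointer to a standard": the agent stores a reference
# to some utility function and picks the available action whose outcome that
# function scores highest. Purely illustrative.

def best_action(utility_fn, actions):
    # actions: mapping from action name to the world-state it would produce
    return max(actions, key=lambda name: utility_fn(actions[name]))

# Two agents facing the same options, with pointers to different standards:
#   best_action(morality_standard, candidate_actions)
#   best_action(paperclip_standard, candidate_actions)
# will in general return different actions, without either being "mistaken"
# about the outputs of the other's function.
```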
If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don’t think you can have multiple contradictory objective truths.
Ok, instead of meter measurements, let’s look at cubit measurements. Different ancient cultures represented significantly different physical lengths by ‘cubits.’ So a measurement of 10 cubits to a Roman was a different physical distance than 10 cubits to a Babylonian.
A given object could thus be ‘over ten cubits’ and ‘under ten cubits’ at the same time, though in different senses. Likewise, a given action can be ‘right’ and ‘wrong’ at the same time, though in different senses.
The surface judgments contradict, but there need not be any propositional conflict.
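A quick worked version of the cubit point; the cubit lengths below are round made-up figures for illustration, not historical measurements.

```python
# Same physical object, two cubit standards. The cubit lengths are round
# illustrative figures, not historical measurements.

length_m = 4.7                        # the one object being measured
roman_cubit_m = 0.44
babylonian_cubit_m = 0.50

print(length_m / roman_cubit_m)       # ~10.7 -> "over ten cubits" by this standard
print(length_m / babylonian_cubit_m)  # 9.4   -> "under ten cubits" by that one

# Both reports are objectively correct; the apparent conflict is only in the
# surface wording, because "ten cubits" names a different length in each case.
```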
You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.
Isn’t this done by appealing to the values of the majority?
It’ll work on people who already subscribe to rationality, whereas relativism won’t.
Only if — independent of values — certain values are rational and others are not.
Likewise, a given action can be ‘right’ and ‘wrong’ at the same time, though in different senses.
Are you sure that people mean different things by ‘right’ and ‘wrong’, or are they just using different criteria to judge whether something is right or wrong?
Isn’t this done by appealing to the values of the majority?
It’s done by changing the values of the majority... by showing the majority that they ought (in a rational sense of ‘ought’) to think differently. The point being that if
correct reasoning eventually leads to uniform results, we call that objective.
Only if — independent of values — certain values are rational and others are not.
Does it work or not? Have majorities not been persuaded that it’s wrong, if convenient, to oppress minorities?
Are you sure that people mean different things by ‘right’ and ‘wrong’, or are they just using different criteria to judge whether something is right or wrong?
What could ‘right’ and ‘wrong’ mean, beyond the criteria used to make the judgment?
It’s done by changing the values of the majority... by showing the majority that they ought (in a rational sense of ‘ought’) to think differently.
Sure, if you’re talking about appealing to people to change their non-fundamental values to be more in line with their fundamental values. But I’ve still never heard how reason can have anything to say about fundamental values.
Does it work or not? Have majorities not been persuaded that it’s wrong, if convenient, to oppress minorities?
So far as I can tell, only by reasoning from their pre-existing values.
What could ‘right’ and ‘wrong’ mean, beyond the criteria used to make the judgment?
“Should be rewarded” and “should be punished”. If there was evidence of people
saying that the good should be punished, that would indicate that some people
are disagreeing about the meaning of good/right. Otherwise, disagreements
are about criteria for assigning the term.
So far as I can tell, only by reasoning from their pre-existing values.
But not for all of them (since some of them get discarded) and not
only from moral values (since people need to value reason to be reasoned with).
For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving ‘the well-being of conscious creatures,’ then there’s a bit more going on than it just being right for you and me.
OK, but what I want to know is how you react to some person -whose belief system is internally consistent- who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him—there is no sense in which he ought not to have done what he did (assuming his belief system doesn’t inveigh against him offending yours)?
Or, to put the point another way, isn’t having truth as its goal part of the concept of belief? [...] But if this is fair I’m back to wondering where the ought comes from.
Perhaps it comes from the way you view the concept of belief as implying a goal?
Touche.
Look, what I’m getting at is this. I assume we can agree that
“68 + 57 = 125” is true if and only if 68 + 57 = 125
This being the case, if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, “Why ought I to believe that 68 + 57 = 125?”, and B answers, “Because it’s true”, then B is not really saying anything beyond, “Because it does”. B does not answer A’s question.
If the substantive answer is something along the lines that it is a mathematical fact, then I am interested to know how you conceive of mathematical facts, and whether there mightn’t be moral facts of generally the same ilk (or if not, why). But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside—you’ll surely allow this looks pretty dubious at least superficially.
Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values—and possibly in quite different domains as well (politics, aesthetics, gardening)?
Yes.
What I was getting at is that this looks like complete moral relativism -‘right for me’ is the only right there is (since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people’s values differ). I’m understanding that you’re willing to bite this bullet.
I think that’s generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.
I take your point here. I may be conflating ethical and meta-ethical theory. I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?). If we’re talking about a defence of, say, non-cognitivism, though, maybe what you say is fair.
No, I wouldn’t say that. It would be a little odd to say anyone who doesn’t hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty.
This is fair.
Instead, I would affirm:
In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
This is an interesting proposal, but I’m not sure what it implies. Is it possible for a rational person to strive to believe anything but the truth? Whether in math or anything else, doesn’t a rational person always try to believe what is correct? Or, to put the point another way, isn’t having truth as its goal part of the concept of belief? If so, I suggest this collapses to something like
*When considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
or, more plausibly,
*When considering 68 + 57, we ought to believe it equals 125 or some equivalent expression.
But if this is fair I’m back to wondering where the ought comes from.
A philosopher is doing useful work for me if he shows me that a tempting-sounding definition of ‘morality’ doesn’t pick out the set of things I want it to pick out, or that some other definition turns out not to refer to any clear set at all.
That is an important point. People often run on examples as much as or more than
they do on definitions, and if their intuitions about examples are strong, that can be used
to fix their definitions (ie give them revised definitions that serve their intuitions better).
The rest of the post contained good material that needed saying.
I feel like your austere meta-ethicist is mostly missing the point. It’s utterly routine for different people to have conflicting beliefs about whether a given act is moral*. And often they can have a useful discussion, at the end of which one or both participants change their beliefs. These conversations can happen without the participants changing their definitions of words like ‘moral’, and often without them having a clear definition at all.
[This is my first LW comment—if I do something wrong, please bear with me]
This suggests that precise definitions or agreement about definitions isn’t all that critical. But it’s sometimes useful to be able to reason from stipulated and mutually agreed definitions, in which case meta-ethical speculation and reasoning is doing useful work if it offers a menu of crisp, useful, definitions that can be used in discussion of specific moral claims. Relatedly, it’s also doing useful work by offering a set of definitions that help people conceptualize and articulate their personal feelings about morality, even absent a concrete first-order question.
And part of what goes into picking definitions is to understand their consequences. A philosopher is doing useful work for me if he shows me that a tempting-sounding definition of ‘morality’ doesn’t pick out the set of things I want it to pick out, or that some other definition turns out not to refer to any clear set at all.
Many mathematical entities have multiple logically equivalent definitions, that are of different utility in different contexts. (E.g., sometimes I want to think about a circle as a locus of points, and sometimes as the solution set to an equation.) In the real world, something similar happens.
When I discuss, say, abortion, with somebody, probably there are multiple working definitions of ‘moral’ that could be mutually agreed upon for the purpose of the conversation, and the underlying dispute would still be nontrivial and intelligible. But some definitions might be more directly applicable to the discussion—and philosophical reasoning might be helpful in figuring out what the consequences of various definitions are. For instance, a non-cognitive strikes me intuitively as less likely to be useful—but I’d be open to an argument showing how it could be useful in a debate.
Probably a great deal of academic writing on meta-ethics is low value. But that’s true of most writing on most topics and doesn’t show that the topic is pointless. (With academics being major offenders, but not the only offenders.)
*I’m thinking of the individual personal changes in belief that went along with increased opposition to official racism in America over the course of the 20th century. Or opposition to slavery in the 19th.
Welcome to Less Wrong!
Is there a part of your comment that you suspect I disagree with? Or, is there a sentence in my post with which you disagree?
Having had time to mull over—I think here’s something about your post that bothers me. I don’t think it’s possible to pinpoint a single sentence, but here are two things that don’t quite satisfy me.
1) Neither your austere or empathetic meta-ethicists seem to be telling me anything I wanted to hear. What I want is a “linguistic meta-ethicist”, who will tell me what other competent speakers of English mean when they use “moral” and suchlike terms. I understand that different people mean different things, and I’m fine with an answer which comes in several parts, and with notes about which speakers are primarily using which of those possible definitions.
What I don’t want is a brain scan from each person I talk to—I want an explanation that’s short and accessible enough to be useful in conversations. Conventional ethics and meta-ethics has given a bunch of useful definitions. Saying “well, it depends” seems unnecessarily cautious; saying “let’s decode your brain” seems excessive for practical purposes.
2) Most of the conversations I’m in that involve terms like “moral” would be only slightly advanced by having explicit definitions—and often the straightforward terms to use instead of “moral” are very nearly as contentious or nebulous. In your own examples, you have your participants talk about “well-being” and “non-moral goodness.” I don’t think that’s a significant step forward. That’s just hiding morality inside the notion of “a good life”—which is a sensible thing to say, but people have been saying it since Plato, and it’s an approach that has problems of its own.
By the way, I do understand that I may not have been your target audience, and that the whole series of posts has been carefully phrased and well organized, and I appreciate that.
I would think that the Hypothetical Imperatives are useful there. You can thus break down your own opinions into material of the form:
“If the set X of imperative premises holds, and the set Y of factual premises holds, then logic Z dictates that further actions W are imperative.
“I hold X already, and I can convince logic Z of the factual truth of Y, thus I believe W to be imperative.”
Even all those complete bastards who disagree with your X can thus come to an agreement with you about the hypothetical as a whole, provided they are epistemically rational. Having isolated the area of disagreement to X, Y, or Z, you can then proceed to argue about it.
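A rough sketch of that decomposition; the example argument, field names, and helper function are invented for illustration only.

```python
# Hypothetical imperatives as (X, Y, Z) -> W, so that disagreement can be
# pinned to a specific component. The example content is invented.

argument = {
    "X_imperatives": ["people's lives ought to be protected"],
    "Y_facts": ["random killing destroys people's lives"],
    "Z_logic": "instrumental reasoning",
    "W_conclusion": ["refrain from random killing"],
}

def locate_disagreement(argument, accepts):
    # accepts: which components the other party grants, e.g.
    # {"X_imperatives": True, "Y_facts": False, "Z_logic": True}
    for part in ("X_imperatives", "Y_facts", "Z_logic"):
        if not accepts.get(part, False):
            return part        # this is the component to argue about
    return None                # they grant X, Y and Z, so W follows for them too

print(locate_disagreement(argument,
                          {"X_imperatives": True, "Y_facts": False, "Z_logic": True}))
# -> "Y_facts"
```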
asr,
Your linguistic metaethicist sounds like the standard philosopher doing conceptual analysis. Did you see my post on ‘Conceptual Analysis and Moral Theory’?
I think conversations using moral terms would be greatly advanced by first defining the terms of the debate, as Aristotle suggested. Also, the reason ‘well-being’ or ‘non-moral goodness’ are not unpacked is because I was giving brief examples. You’ll notice the austere metaethicist said things like “assuming we have the same reduction of well-being in mind...” I just don’t have the space to offer such reductions in what is already a long post.
I would find it helpful—and I think several of the other posters here would as well—to see one reduction on some nontrivial question carried far enough for us to see that the process can be made to work. If I understand right, your approach requires that speakers, or at least many speakers much of the time, can reduce from disputed, loaded, moral terms to reasonably well-defined and fact-based terminology. That’s the point I’d most like to see you spend your space budget on in future posts.
Definitions are good. Precise definitions are usually better than loose definitions. But I suspect that in this context, loose definitions are basically good enough and that there isn’t a lot of value to be extracted by increased precision there. I would like evidence that improving our definitions is a fruitful place to spend effort.
I did read your post on conceptual analysis. I just re-read it. And I’m not convinced that the practice of conceptual analysis is any more broken than most of what people get paid to do in the humanities and social sciences. My sense is that the standard textbook definitions are basically fine, and that the ongoing work in the field is mostly just people trying to get tenure and show off their cleverness.
I don’t see that there’s anything terribly wrong with the practice of conceptual analysis—so long as we don’t mistake an approximate and tentative linguistic exercise for access to any sort of deep truth.
I don’t think many speakers actually have an explicit ought-reduction in mind when they make ought claims. Perhaps most speakers actually have little idea what they mean when they use ought terms. For these people, emotivism may roughly describe speech acts involving oughts.
Rather, I’m imagining a scenario where person A asks what they ought to do, and person B has to clarify the meaning of A’s question before B can give an answer. At this point, A is probably forced to clarify the meaning of their ought terms more thoroughly than they have previously done. But if they can’t do so, then they haven’t asked a meaningful question, and B can’t answer the question as given.
Why? What I’ve been saying the whole time is that improving our definitions isn’t worth as much effort as philosophers are expending on it.
On this, we agree. That’s why conceptual analysis isn’t very valuable, along with “most of what people get paid to do in the humanities and social sciences.” (Well, depending on where you draw the boundary around the term ‘social sciences.’)
Do you see something wrong with the way Barry and Albert were arguing about the meaning of ‘sound’ in Conceptual Analysis and Moral Theory? I’m especially thinking of the part about microphones and aliens.
I agree that emotivism is an accurate description, much of the time, for what people mean when they make value judgments. I would also agree that most people don’t have a specific or precise definition in mind. But emotivism isn’t the only description and for practical purposes it’s often not the most useful. Among other things, we have to specify which emotion we are talking about. Not all disgust is moral disgust.
Value judgments show up routinely in law and in daily life. It would be an enormous, difficult, and probably low-value task to rewrite our legal code to avoid terms like “good cause”, “unjust enrichment”, “unconscionable contract”, and the like. Given that we’re stuck with moral language, it’s a useful project to pull out some definitions to help focus discourse slightly. But we aren’t going to be able to eliminate them. “Morality” and its cousins are too expensive to taboo.
We want law and social standards to be somewhat loosely defined, to avoid unscrupulous actors trying to worm their way through loopholes. We don’t want to be overly precise and narrow in our definitions—we want to leverage the judgement of judges and juries. But conversely, we do want to give them guidance about what we mean by those words. And precedent supplies one sort of guidance, and some definitions give them an additional sort of guidance.
I suspect it would be quite hard to pick out precisely what we as a society mean when we use those terms in the legal code—and very hard to reduce them to any sort of concrete physical description that would still be human-intelligible. I would be interested to see a counterexample if you can supply one easily.
I have the sense that trying to talk about human judgement and society without moral language would be about like trying to discuss computer science purely in terms of the hardware—possible, but unnecessarily cumbersome.
One of the common pathologies of the academy is that somebody comes up with a bright idea or a powerful intellectual tool. Researchers then spend several years applying that tool to increasingly diverse contexts, often where the marginal return from the tool is near-zero. Just because conceptual analysis is being over-used doesn’t mean that it is always useless! The first few uses of it may indeed have been fairly high-value in aiding us in communicating. The fact that the tool is then overused isn’t a reason to ignore it.
Endless wrangles about definitions, I think, are necessarily low-value. Working out a few useful definitions or explanations for a common term can be valuable, though—particularly if we are going to apply those terms in a quasi-formal setting, like law.
It may be routine in the sense that it often happens, but not routine in the sense that this is a reliable approach to settling moral differences. Often such disputes are not settled despite extensive discussions and no obvious disagreement about other kinds of facts.
This can be explained if individuals are basing their judgments off differing sets of values that partially overlap. Even if both participants are naively assuming their own set of values is the set of moral values, the fact of overlapping will sometimes mean non-moral considerations which are significant to one’s values will also be significant for the other’s values. Other times, this won’t be the case.
For example, many pro-lifers naively assume that everyone places very high value on all human organisms, so they spend a lot of time arguing that an embryo or fetus is a distinct human organism. Anyone who is undecided or pro-choice who shares this value but wasn’t aware of the biological evidence that unborn humans are distinct organisms from their mothers may be swayed by such considerations.
On the other hand, many pro-choicers simply do not place equally high value on all human organisms, without counting other properties like sentience. Or — following Judith Jarvis Thomson in “A Defense of Abortion” — they may place equally high value on all human organisms, but place even greater value on the sort of bodily autonomy denied by laws against abortion.
Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.
I agree with all the claims you’re making about morality and about moral discussion. But I don’t quite see where any of this is giving me any new insights or tools. Sure, people have different but often overlapping values. I knew that. I think most adults who ever have conversations about morality know that. And we know that without worrying too much about the definition of morality and related words.
But I think everything you’ve said is also true about personal taste in non moral questions. I and my friends have different but overlapping taste in music, because we have distinct but overlapping set of desiderata for what we listen to. And sometimes, people get convinced to like something they previously didn’t. I want a meta-ethics that gives me some comparative advantage in dealing with moral problems, as compared to other sorts of disagreements. I had assumed that lukeprog was trying to say something specifically about morality, not just give a general and informal account of human motivation, values, and preferences.
Thus far, this sequence feels like a lot of buildup and groundwork that is true but mostly not in much dispute and mostly doesn’t seem to help me accomplish anything. Perhaps my previous comment should just have been a gentle nudge to lukeprog to get to the point.
This may be a case where not getting it wrong is the main point, even if getting it right is a let down.
My own view is quite similar to Luke’s and I find it useful when I hear a moral claim to try sorting out how much of the claim is value-expression and how much is about what needs to be done to promote values. Even if you don’t agree about values, it still helps to figure out what someone else’s fundamental values are and argue that what they’re advocating is out of line with their own values. People tend to be mistaken about how to fulfill their own values more than they are about how to fulfill their own taste in music.
Yes.
That is why I can interrogate what somebody means by ‘ought’ and then often show that by their own definition of ought, what they thought they ‘ought’ to do is not what they ‘ought’ to do.
Do you know of anything better?
OTOH, the problem remains that people act on their values, and that one person’s actions can affect another person. Pluralistic morality is terrible at translating into a uniform set of rules that all are beholden to.
Why is that the test of a metaethical theory rather than the theory which best explains moral discourse? Categorical imperatives — if that’s what you’re referring to — are one answer to the best explanation of moral discourse, but then we’re stuck showing how categorical imperatives can hold...or accepting error theory.
Perhaps ‘referring to categorical imperatives’ is not the only or even the best explanation of moral discourse. See “The Error in the Error Theory” by Stephen Finlay.
Because there is a practical aspect to ethics. Moral discourse involves the idea that people should do the obligatory and refrain from the forbidden, irrespective of who they are. That needs explaining as well.
Moral discourse is about what to do, but it doesn’t seem to (at least always) be about what everyone must do for no prior reason.
Uh-huh. Is that an issue of commission rather than omission? Are people not obligated to refrain from theft, murder, and rape, their inclinations notwithstanding?
If by ‘obligated’ you mean it’s demanded by those who fear being the targets of those actions, yes. Or if you mean exercising restraint may be practically necessary to comply with certain values those actions thwart, yes. Or if you mean doing those things is likely to result in legal penalties, that’s often the case.
But if you mean it’s some simple fact that we’re morally obligated to restrain ourselves from doing certain things, no. Or at least I don’t see how that could even possibly be the case, and I already have a theory that explains why people might mistakenly think such a thing is the case (they mistake their own values for facts woven into the universe, so hypothetical imperatives look like categorical imperatives to them).
The ‘commission’ vs. ‘omission’ thing is often a matter of wording. Rape can be viewed as omitting to get proper permission, particularly when we’re talking about drugging, etc.
Well, I have a theory about how it could be the case. Objective morality doesn’t have to be a fact-like thing that is paradoxically undetectable. It could be based on the other source of objectivity: logic and reason. It’s an analytical truth that you shouldn’t do to others what you wouldn’t want done to yourself. You are obliged to be moral so long as you can reason morally, in the sense that you will be held responsible.
I’m skeptical that this statement is true, let alone an analytic truth. Different people have different desires. I take the Golden Rule to be a valuable heuristic, but no more than that.
What is your reason for believing that it is true as an absolute rule?
Just to clarify where you stand on norms: Would you say a person is obligated by facts woven into the universe to believe that 68 + 57 = 125? (ie, are we obligated in this sense to believe anything?)
To stick my own neck out: I am a realist about values. I think there are facts about what we ought to believe and do. I think you have to be, to capture mathematical facts. This step taken, there’s no further commitment required to get ethical facts. Obviously, though, there are epistemic issues associated with the latter which are not associated with the former.
Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values—and possibly in quite different domains as well (politics, aesthetics, gardening)?
You seem to be suggesting meta-ethics aims merely to give a descriptively adequate characterisation of ethical discourse. If so, would you at least grant that many see its goal as (roughly) giving a general characterisation of moral rightness, which we all ought to strive for?
Facts as in true statements, or facts as in states of affairs?
Facts in the disappointingly deflationary sense that
It is a fact that P if and only if P (and that’s all there is to say about facthood).
This position is a little underwhelming to any who seek a metaphysically substantive account of what makes things true, but it is a realist stance all the same (no?). If you have strong arguments against this or for an alternative, I’m interested to hear.
No, I wouldn’t say that. It would be a little odd to say anyone who doesn’t hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty. Instead, I would affirm:
In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
(I’m leaving ‘mathematically correct’ vague so different views on the nature of math are accommodated.)
In other words, the obligation relies on a goal. Or we could say normative answers require questions. Sometimes the implied question is so obvious, it seems strange to bother identifying it.
Yes.
I think that’s generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.
Taking your thoughts out of order,
What I was getting at is that this looks like complete moral relativism -‘right for me’ is the only right there is (since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people’s values differ). I’m understanding that you’re willing to bite this bullet.
I take your point here. I may be conflating ethical and meta-ethical theory. I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?). If we’re talking about a defence of, say, non-cognitivism, though, maybe what you say is fair.
This is fair.
This is an interesting proposal, but I’m not sure what it implies. Is it possible for a rational person to strive to believe anything but the truth? Whether in math or anything else, doesn’t a rational person always try to believe what is correct? Or, to put the point another way, isn’t having truth as its goal part of the concept of belief? If so, I suggest this collapses to something like
*When considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
or, more plausibly,
*When considering 68 + 57, we ought to believe it equals 125 or some equivalent expression.
But if this is fair I’m back to wondering where the ought comes from.
While it is relativism, the focus is a bit different from ‘right for me.’ More like ‘this action measures up as right against standard Y’ where this Y is typically something I endorse.
For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving ‘the well-being of conscious creatures,’ then there’s a bit more going on than it just being right for you and me.
Or if I consider a practice morally right for the above reason, but you consider it morally wrong because it falls afoul of Rawls’ theory of justice, there’s more going on than it just being right for me and wrong for you. It’s more like I’m saying it’s right{Harris standard} and you’re saying it’s wrong{Rawls standard}. (...at least as far as cognitive content is concerned; we would usually also be expressing an expectation that others adhere to the standards we support.)
Of course the above are toy examples, since people’s values don’t tend to line up neatly with the simplifications of philosophers.
It’s not apparent that values differ just because judgments differ, so there’s still a lot of interesting work to find out if disagreements can be explained by differing descriptive beliefs. But, yes, once a disagreement is known to result from a pure difference in values, there isn’t a rational way to resolve it. It’s like Luke’s ‘tree falling’ example; once we know two people are using different definitions of ‘sound,’ the best we can do is make people aware of the difference in their claims.
Yep. While those are interesting standards to consider, it’s pretty clear to me that real world moral discourse is wider and more messy than any one normative theory. We can simply declare a normative theory as the moral standard — plenty of people have! — but the next person whose values are a better match for another normative theory is just going to disagree. On what basis do we find that one normative theory is correct when, descriptively, moral pluralism seems to characterize moral discourse?
If being rational consists in doing what it takes to fulfill one’s goals (I don’t know what the popular definition of ‘rationality’ is on this site), then it is still possible to be rational while holding a false belief, if a false belief helps fulfill one’s goals.
Now typically, false beliefs are unhelpful in this way, but I know at least Sinnott-Armstrong has talked about an ‘instrumentally justified’ belief that can go counter to having a true belief. The example I’ve used before is an Atheist married to a Theist whose goal of having a happy marriage would in fact go better if she could take a belief-altering pill so she would falsely take on her spouse’s belief in God.
Perhaps it comes from the way you view the concept of belief as implying a goal?
At risk of triggering the political mind-killer, I think there are some potentially problematic consequences of this view.
Suppose we don’t have good grounds for keeping one set of moral beliefs over another. Now suppose somebody offers to reward us for changing our views, or punish us for not changing. Should we change our views?
To go from the philosophical to the concrete: There are people in the world who are fanatics who are largely committed to some reading of the Bible/Koran/Little Green Book of Colonel Gaddafi/juche ideology of the Great Leader/whatever. Some of those people have armies and nuclear weapons. They can bring quite a lot of pressure to bear on other individuals to change their views to resemble those of the fanatic.
If rationalism can’t supply powerful reasons to maintain a non-fanatical worldview in the face of pressure to self-modify, that’s an objection to rationalism. Conversely, altering the moral beliefs of fanatics with access to nuclear weapons strikes me as an extremely important practical project. I suspect similar considerations will apply if you consider powerful unfriendly AIs.
This reminds me of that line of Yeats, that “the best lack all conviction, while the worst are full of passionate intensity.” Ideological differences sometimes culminate in wars, and if you want to win those wars, you may need something better than “we have our morals and they have theirs.”
To sharpen the point slightly: There’s an asymmetry between the rationalists and the fanatics, which is that the rationalists are aware that they don’t have a rational justification for their terminal values, but the fanatic does have a [fanatical] justification. Worse, the fanatic has a justification to taboo thinking about the problem, and the rationalist doesn’t.
Just because morality is personal doesn’t make it not real. If you model people as agents with utility functions, the reason not to change is obvious—if you change, you won’t do all the things you value. Non-fanatics can do that the same as fanatics.
The difference comes when you factor in human irrationality. And sure, fanatics might resist where anyone sane would give in. “We will blow up this city unless you renounce the Leader,” something like that. But on the other hand, rational humans might resist techniques that play on human irrationality, where fanatics might even be more susceptible than average. Good cop / bad cop for example.
What about on a national scale, where, say, an evil mastermind threatens to nuke every nation that does not start worshiping the flying spaghetti monster? Well, what a rational society would do is compare benefits and downsides, and worship His Noodliness if it was worth it. Fanatics would get nuked. I fail to see how this is an argument for why we shouldn’t be rational.
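For what it’s worth, the “compare benefits and downsides” step is just an expected-value comparison; here is a toy sketch with invented utilities on an arbitrary scale.

```python
# The "compare benefits and downsides" step, as a bare expected-value
# comparison with invented utilities on an arbitrary scale.

u_refuse_and_get_nuked = -1_000_000
u_comply_outwardly     = -1_000      # distasteful, but the city survives

decision = "comply outwardly" if u_comply_outwardly > u_refuse_and_get_nuked else "refuse"
print(decision)   # "comply outwardly"
```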
And that’s why Strawmansylvania has never won a single battle, I agree. Just because morality is personal doesn’t make it unmoving.
Does this imply that if a rational actor has terminal values that are internally consistent and in principle satisfiable, it would always be irrational for the actor to change those values or allow them to change?
That doesn’t seem right either. Somehow, an individual improving their moral beliefs as they mature, the notional Vicar of Bray, and Pierre Laval are all substantially different cases of people changing their [terminal] beliefs in response to events. There’s something badly wrong with a theory that can’t distinguish those cases.
Also, my apologies if this has been already discussed to death on LW or elsewhere—I spent some time poking and didn’t see anything on this point.
No, but it sets a high standard—If you value, say, the company of your family, then modifying to not want that (and therefore not spend much time with your family) costs as much as if you were kept away from your family by force for the rest of your life. So any threats have to be pretty damn serious, and maybe not even death would work if you know important secrets or do not highly value living without some key values.
I wouldn’t call all of those cases of modifying terminal values. From some quick googling (I didn’t know about the Vicar of Bray), what the Vicar of Bray cared about was being the vicar of Bray. What Pierre Laval cared about was being the head of the government and not being killed, maybe. So they’re maybe not good examples of changing terminal values, as opposed to instrumental ones.
Also “improving their moral beliefs as they mature” is a very odd concept once you think about it. How do you correctly judge whether a moral belief is right to hold without having a correct ultimate belief from the start to do the judging? It’s really an example of how humans are emphatically not rational agents—we follow a bunch of evolved and cultural rules, which can appear to produce consistent behavior, but really have all these holes and internal conflicts. And things can change suddenly, without the sort of rational deliberation described above.
You could say the same about “improving our standards of scientific inference.” Circular? Perhaps, but it needn’t be a vicious circle. It’s pretty clear that we’ve accomplished it, so it must be possible.
I would cheerfully agree that humans aren’t rational and routinely change their minds about morality for non-rational reasons.
This is one of the things I was trying to get at. Ask when we should change our minds for non-rational reasons, and when we should attempt to change others’ minds using non-rational means.
The same examples I mentioned above work for these questions too.
Here’s what I had in mind with the reference to the Vicar of Bray. Imagine an individual with two terminal values: “Stay alive and employed” and the reigning orthodoxy at the moment. The individual sincerely believes in both, and whenever they start to conflict, changes their beliefs about the orthodoxy. He is quite sincere in advocating for the ruling ideology at each point in time; he really does believe in divine right of kings, just so long as it’s not a dangerous belief to hold.
The beliefs in question are at least potentially terminal moral beliefs. Without delving deep into the history, let’s stipulate for the purpose of the conversation that we’re talking about a rational actor who has a sequence of terminal moral beliefs about what constitutes a just government, and that these beliefs shift with the political climate.
Now for contrast, let’s consider a hypothetical rational but very selfish child. The child’s parents attempt and succeed in changing the child’s values to be less selfish. They do this by the usual parental tactics of punishment and example-setting, not by rational argument. By your social standard and mine, this is an improvement to the child.
Both the vicar and the child are updating their moral beliefs in response to outside pressure, not rational deliberation. The general consensus is that parents are obligated to bring up their children not to be overly self-centered, and that reasoning with children is not a sufficient pedagogic technique; but conversely, that coercive government pressure on religion is ignoble.
Is this simply that you and I think “a change in moral beliefs, brought about by non-reasonable means is good (all else equal), if it significantly improves the beliefs of the subject by my standards”?
I’d agree with that. Maybe with some caveats, but generally yes.
I think the caveats will turn out to matter a lot. One of the things that human moral beliefs do, in practice, is give other humans some reasons to trust you. If I know that you are committed, for non-instrumental reasons, to avoid manipulating* me into changing my values, that gives me reasons to trust you. Conversely, if your moral view is that it’s legitimate to lie to people to make them do what you want, people will trust you less.
Obviously, people have incentives to lie about their true values. I think equally obviously, people are paying attention and looking hard for that sort of hypocrisy.
*This sentence is true for a range of possible expansions of “manipulating”.
My statement was more observational than ideal, though. Sure, a rational agent can be averse to manipulating other people (and humans often are too), because agents can care about whatever they want. But that doesn’t bear very strongly on how the language is used compared to the fact that in real-world usage I see people say things like “improved his morals” by only three standards: consistency, how much society approves, and how much you approve.
I think the worry here is that realizing ‘right’ and ‘wrong’ are relative to values might make us give up our values. Meanwhile, those who aren’t as reflective are able to hold more strongly onto their values.
But let’s look at your deep worry about fanatics with nukes. Does their disregard for life have to also be making some kind of abstract error for you to keep and act on your own strong regard for life?
Almost. What I’m worried about is that acknowledging or defining values to be arbitrary makes us less able to hold onto them and less able to convince others to adopt values that are safer for us. I think it’s nearly tautological that right and wrong are defined in terms of values.
The comment about fanatics with nuclear weapons wasn’t to indicate that that’s a particular nightmare of mine. It isn’t. Rather, that was to get at the point that moral philosophy isn’t simply an armchair exercise conducted amongst would-be rationalists—sometimes having a good theory is a matter of life and death.
It’s very tempting, if you are firmly attached to your moral beliefs, and skeptical about your powers of rationality (as you should be!) to react to countervailing opinion by not listening. If you want to preserve the overall values of your society, and are skeptical of others’ powers of rational judgement, it’s tempting to have the heretic burnt at the stake, or the philosopher forced to drink the hemlock.
One of the undercurrents in the history of philosophy has been an effort to explain why a prudent society that doesn’t want to lose its moral footings can still allow dissent, including dissent about important values, that risks changing those values to something not obviously better. Philosophers, unsurprisingly, are drawn to philosophies that explain why they should be allowed to keep having their fun. And I think that’s a real and valuable goal that we shouldn’t lose sight of.
I’m willing to sacrifice a bunch of other theoretical properties to hang on to a moral philosophy that explains why we don’t need heresy trials and why nobody needs to bomb us for being infidels.
I don’t see much difference there. Relativist morality doesn’t have to be selfish (although the reverse is probably true).
OK, but what I want to know is how you react to some person -whose belief system is internally consistent- who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him—there is no sense in which he ought not to have done what he did (assuming his belief system doesn’t inveigh against him offending yours)?
Touche.
Look, what I’m getting at is this. I assume we can agree that
“68 + 57 = 125” is true if and only if 68 + 57 = 125
This being the case, if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, “Why ought I to believe that 68 + 57 = 125?”, and B answers, “Because it’s true”, then B is not really saying anything beyond, “Because it does”. B does not answer A’s question.
If the substantive answer is something along the lines that it is a mathematical fact, then I am interested to know how you conceive of mathematical facts, and whether there mightn’t be moral facts of generally the same ilk (or if not, why). But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside—you’ll surely allow this looks pretty dubious at least superficially.
There’s an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.
Such a person would be objectively afoul of a standard against randomly killing people. But let’s say he acted according to a standard which doesn’t care about that; we wouldn’t be able to tell him he did something wrong by that other standard. Nor could we tell him he did something wrong according to the one, correct standard (since there isn’t one).
But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.
Unless A was just asking to be walked through the calculation steps, then I agree B is not answering A’s question.
I’m not sure I’m following the argument here. I’m saying that all normativity is hypothetical. It sounds like you’re arguing there is a categorical ‘ought’ for believing mathematical truths because it would be very strange to say we only ‘ought’ to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical ‘oughts,’ there might be others.
Is it something like that?
If so, then I would offer the goal of “in order to be logically consistent.” There are some who think moral oughts reduce to logical consistency, so we ought to act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical inconsistency is going to rein in people with desires that run counter to it any better than relativism can.
You can stop right there. If no theory of morality based on logical consistency is offered, you don’t have to do any more.
I suppose you mean “if no theory of morality based on logical consistency is offered”.
Of course, one could make an attempt to research reason-based metaethics before discarding the whole idea.
Agreed and edited.
I observe that you didn’t offer a pointer to a theory of morality based on logical consistency.
I agree with Eby: you are a troll. I’m done here.
For one thing, I don’t think logical consistency is quite the right criterion for reason-based objective morality. Pointing out that certain ideas are old and well documented is offering a pointer, and is not trolling.
I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary. Sorry if I’m not getting it.
This states the thought very clearly -thanks.
I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It’s possible this doesn’t really engage your thoughts, though.
If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn’t that an important result?
When a dispute is over fundamental values, I don’t think we can give the other side compelling grounds to act according to our own values. Consider Eliezer’s paperclip maximizer. How could we possibly convince such a being that it’s doing something irrational, besides pointing out that its current actions are suboptimal for its goal in the long run?
Thanks for the link to the Carroll story. I plan on taking some time to think it over.
It’s important to us, but — as far as I can tell — only because of our values. I don’t think it’s important ‘to the universe’ for someone to refrain from going on a killing spree.
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
That sounds like a good rational argument to me. Is the paperclip maximiser supposed to have a different rationality or just different values?
Like so much material on this site, that tacitly assumes values cannot be reasoned about.
If you don’t want murderers running around killing people, then it’s consistent with your values to set up a situation in which murderers can expect to be punished, and one way to do that is to actually punish murderers.
Yes, that’s arbitrary, in the same sense that every preference you have is arbitrary. If you are going to act upon your preferences without deceiving yourself, you have to feel comfortable with doing arbitrary things.
I think you missed the point quite badly there. The point is that there is no rationally compelling reason to act on any arbitrary value. You gave the example of punishing murderers, but if every value is equally arbitrary that is no more justifiable than punishing stamp collectors or the left-handed. Having accepted moral subjectivism, you are faced with a choice between acting irrationally or not acting. OTOH, you haven’t exactly given moral objectivism a run for its money.
I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.
This states the thought very clearly -thanks.
I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It’s possible this doesn’t really engage your thoughts, though. Some people I know think it’s just foolish.
As is pointed out in the other thread from your post, plausibly our goal in the first instance is to show that it is rational not to kill people.
I don’t think that works. If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don’t think you can have multiple contradictory objective truths.
You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.
It’ll work on people who already subscribe to rationality, whereas relativism won’t.
What’s contradictory about the same object being judged differently by different standards?
Here’s a standard: return the width of the object in meters. Here’s another: return the number of wavelengths of blue light that make up the width of the object. And another: return the number of electrons in the object.
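A minimal sketch of that in code (the wavelength is a rough real figure, but the object and its width are invented, and I’ve left out the electron-count standard since it would need material-specific facts):

```python
# Two of the standards above, written as functions. Same object, different
# numbers, no contradiction -- each function answers a different question.

BLUE_WAVELENGTH_M = 470e-9  # roughly 470 nm; "blue" is a band, not a single wavelength

def width_in_meters(width_m: float) -> float:
    return width_m

def width_in_blue_wavelengths(width_m: float) -> float:
    return width_m / BLUE_WAVELENGTH_M

object_width_m = 0.5  # invented example object
print(width_in_meters(object_width_m))            # 0.5
print(width_in_blue_wavelengths(object_width_m))  # ~1.06e6
```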
No Universally Compelling Arguments seems relevant here.
You realize that the linked post applies to arguments about mathematics or physics just as much as to arguments about morality?
Nothing. There’s nothing contradictory about multiple subjective truths or about multiple opinions, or about a single objective truth. But there is a contradiction in multiple objective truths about morality, as I said.
There isn’t any contradiction in multiple objective truths about different things; but the original hypothesis was multiple objective truths about the same thing, i.e. the morality of an action. If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.
The focus doesn’t have to be on John and Mary; it can be on the morality we’re referencing via John and Mary. By analogy, we could talk about John’s hometown and Mary’s hometown, without being subjectivists about the cities we are referencing.
That isn’t analogous, because towns aren’t epistemic.
Hmm. Sounds like it would be helpful to taboo “objective” and “subjective”. Or perhaps this is my fault for not being entirely clear.
A standard can be put into the form of sentences in formal logic, such that any formal reasoner starting from the axioms of logic will agree about the “judgements” of the standard.
I should mention at this point that I use the word “morality” to indicate a particular standard—the morality-standard—that has the properties we normally associate with morality (“approving” of happiness, “disapproving” of murder, etc.). This is the standard I would endorse (by, for example, acting to maximise “good” according to it) were I fully rational, reflectively consistent, and non-akrasiac.
So the judgements of other standards are not moral judgements, in the sense that they are not statements about the output of this standard. There would indeed be something inconsistent about asserting that other standards made statements about—i.e. had the same output as—this standard.
Given that, and assuming your objections about “subjectivity” still exist, what do you mean by “subjective” such that the existence of other standards makes morality “subjective”, and this a problem?
It already seems that you must be resigned to your arguments failing to work on some minds: there is no god that will strike you down if you write a paperclip-maximising AIXI, for example.
Yep. Subjective statements about X can be phrased in objectivese. But that doesn’t make them objective statements about X.
By other standards, do you mean other people’s moral standards, or non-moral (e.g. aesthetic) standards?
Of course. But I think moral objectivism is better as an explanation, because it explains moral praise and blame as something other than a mistake; and I think moral objectivism is also better in practice because having some successful persuasion going on is better than having none.
I don’t know what you mean, if anything, by “subjective” and “objective” here, and what they are for.
Okay… I think I’ll have to be more concrete. I’m going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only “moral” standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.
A mind, for example an alien, may be constructed such that it always executes the action that maximises the utility of some other standard. This utility function may be taken to be the “values” of the alien.
Moral praise and blame is not a mistake; whether certain actions result in an increase or decrease in the value of the moral utility function is an analytic fact. It is further an analytic fact that praise and blame, correctly applied, increase the output of the moral utility function, and that if we failed to do that, we would therefore fail to do the most moral thing.
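A minimal sketch under the VNM simplification above (the world representation and both utility functions are invented toy examples, not a claim about anyone’s actual values):

```python
# A "standard" as a function from a (toy) world-state to a number.
# Both standards below are invented purely for illustration.

def paperclip_standard(world: dict) -> float:
    return world["paperclips"]

def toy_morality_standard(world: dict) -> float:
    # crudely "approves" of happiness and "disapproves" of murder
    return world["happy_people"] - 10 * world["murders"]

world = {"paperclips": 3, "happy_people": 100, "murders": 1}
print(paperclip_standard(world))     # 3
print(toy_morality_standard(world))  # 90
# Different standards assign different numbers to the same world; neither
# output is a claim about the other standard, so there is no contradiction.
```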
By “subjective” I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that—whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doesn’t count for anything, because the shape of the world is not something about which Mary has the last word.
And there is one such standard in the universe, not one per agent?
If Mary thinks the world is flat, she is asserting that a predicate holds of the earth. It turns out it doesn’t, so she is wrong. In the case of thinking vanilla is nice, there is no sensible niceness predicate, so we assume she’s using shorthand for nice_mary, which does exist, so she is correct. She might, however, get confused and think that nice_mary being true meant nice_x holds for all x, and use nice to mean that. If so, she would be wrong.
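A small sketch of the nice / nice_mary distinction (the preference table and predicate names are made up for illustration):

```python
# nice_for is the indexed predicate (nice_mary, nice_john, ...);
# nice_for_everyone is the universal reading Mary might slide into.

preferences = {
    "mary": {"vanilla": True},
    "john": {"vanilla": False},
}

def nice_for(person: str, flavour: str) -> bool:
    return preferences[person][flavour]

def nice_for_everyone(flavour: str) -> bool:
    return all(prefs[flavour] for prefs in preferences.values())

print(nice_for("mary", "vanilla"))   # True: nice_mary(vanilla) holds
print(nice_for_everyone("vanilla"))  # False: the universal claim does not
```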
Okay then. An agent who thinks the morality-standard says something other than it does, is wrong, since statements about the judgements of the morality-standard are tautologically true.
There is precisely one morality-standard.
Each (VNM-rational or potentially VNM-rational) agent contains a pointer to a standard—namely, the utility function the agent tries to maximise, or would try to maximise if they were rational. Most of these pointers within a light year of here will point to the morality-standard. A few of them will not. Outside of this volume there will be quite a lot of agents pointing to other standards.
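A minimal sketch of the “pointer to a standard” picture (the class, the standards, and the world-state are all invented for illustration):

```python
# Each agent just holds a reference to whichever utility function it
# (ideally) maximises; that reference is its "pointer to a standard".

class Agent:
    def __init__(self, name, standard):
        self.name = name
        self.standard = standard  # the standard this agent "points to"

    def evaluate(self, world):
        return self.standard(world)

def toy_morality_standard(world):
    return world["happy_people"] - 10 * world["murders"]

def paperclip_standard(world):
    return world["paperclips"]

world = {"happy_people": 100, "murders": 1, "paperclips": 3}
human = Agent("human", toy_morality_standard)  # pointer aimed at the morality-standard
clippy = Agent("clippy", paperclip_standard)   # pointer aimed at a different standard
print(human.evaluate(world), clippy.evaluate(world))  # 90 3
```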
Ok, instead of meter measurements, let’s look at cubit measurements. Different ancient cultures represented significantly different physical lengths by ‘cubits.’ So a measurement of 10 cubits to a Roman was a different physical distance than 10 cubits to a Babylonian.
A given object could thus be ‘over ten cubits’ and ‘under ten cubits’ at the same time, though in different senses. Likewise, a given action can be ‘right’ and ‘wrong’ at the same time, though in different senses.
The surface judgments contradict, but there need not be any propositional conflict.
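A quick worked version, with rough illustrative numbers rather than careful historical cubit lengths, and an invented object:

```python
# "Over ten cubits" and "under ten cubits" can both be true of the same
# object, in different senses, without any propositional conflict.

ROMAN_CUBIT_M = 0.44       # approximate, for illustration only
BABYLONIAN_CUBIT_M = 0.50  # approximate, for illustration only

object_width_m = 4.6
print(object_width_m / ROMAN_CUBIT_M)       # ~10.45 -> over ten cubits (Roman sense)
print(object_width_m / BABYLONIAN_CUBIT_M)  # 9.2    -> under ten cubits (Babylonian sense)
```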
Isn’t this done by appealing to the values of the majority?
Only if — independent of values — certain values are rational and others are not.
Are you sure that people mean different things by ‘right’ and ‘wrong’, or are they just using different criteria to judge whether something is right or wrong?
It’s done by changing the values of the majority, by showing the majority that they ought (in a rational sense of ‘ought’) to think differently. The point being that if correct reasoning eventually leads to uniform results, we call that objective.
Does it work or not? Have majorities not been persuaded that it’s wrong, even when convenient, to oppress minorities?
What could ‘right’ and ‘wrong’ mean, beyond the criteria used to make the judgment?
Sure, if you’re talking about appealing to people to change their non-fundamental values to be more in line with their fundamental values. But I’ve still never heard how reason can have anything to say about fundamental values.
So far as I can tell, only by reasoning from their pre-existing values.
“Should be rewarded” and “should be punished”. If there was evidence of people saying that the good should be punished, that would indicate that some people are disagreeing about the meaning of good/right. Otherwise, disagreements are about criteria for assigning the term.
But not for all of them (since some of them get discarded), and not only from moral values (since people need to value reason to be reasoned with).
OK, but what I want to know is how you react to some person, whose belief system is internally consistent, who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him—there is no sense in which he ought not to have done what he did (assuming his belief system doesn’t inveigh against him offending yours)?
Touche.
Look, what I’m getting at is this. I assume we can agree that
“68 + 57 = 125” is true if and only if 68 + 57 = 125
This being the case, if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, “Why ought I to believe that 68 + 57 = 125?”, and B answers, “Because it’s true”, then B is not really saying anything beyond, “Because it does”. B does not answer A’s question.
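To make that gap concrete, here is a tiny Lean 4 illustration: inside a fixed formal system the judgement is forced by the system’s own reduction rules, and nothing in the proof answers the question of why one ought to accept those rules in the first place.

```lean
-- Lean 4: inside the system, the judgement is settled by mere computation.
example : 68 + 57 = 125 := rfl
-- `rfl` works because both sides reduce to the same numeral. The proof gives
-- no reason to accept the reduction rules themselves; that "why ought I?"
-- question is exactly what the Carroll-style regress keeps reopening.
```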
If the substantive answer is something along the lines that it is a mathematical fact, then I am interested to know how you conceive of mathematical facts, and whether there mightn’t be moral facts of generally the same ilk (or if not, why). But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside—you’ll surely allow this looks pretty dubious at least superficially.
Taking your thoughts out of order,
What I was getting at is that this looks like complete moral relativism: ‘right for me’ is the only right there is (since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people’s values differ). I take it you’re willing to bite this bullet.
I take your point here. I may be conflating ethical and meta-ethical theory. I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?). If we’re talking about a defence of, say, non-cognitivism, though, maybe what you say is fair.
This is fair.
This is an interesting proposal, but I’m not sure what it implies. Is it possible for a rational person to strive to believe anything but the truth? Whether in math or anything else, doesn’t a rational person always try to believe what is correct? Or, to put the point another way, isn’t having truth as its goal part of the concept of belief? If so, I suggest this collapses to something like
*When considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.
or, more plausibly,
*When considering 68 + 57, we ought to believe it equals 125 or some equivalent expression.
But if this is fair I’m back to wondering where the ought comes from.
That is an important point. People often run on examples as much as or more than they do on definitions, and if their intuitions about examples are strong, that can be used to fix their definitions (i.e. give them revised definitions that serve their intuitions better).
The rest of the post contained good material that needed saying.