At risk of triggering the political mind-killer, I think there are some potentially problematic consequences of this view.
Once a disagreement is known to result from a pure difference in values, there isn’t a rational way to resolve it... the best we can do is make people aware of the difference in their claims.
Suppose we don’t have good grounds for keeping one set of moral beliefs over another. Now suppose somebody offers to reward us for changing our views, or punish us for not changing. Should we change our views?
To go from the philosophical to the concrete: There are fanatics in the world who are largely committed to some reading of the Bible/Koran/Little Green Book of Colonel Gaddafi/juche ideology of the Great Leader/whatever. Some of those people have armies and nuclear weapons. They can bring quite a lot of pressure to bear on other individuals to change their views to resemble those of the fanatic.
If rationalism can’t supply powerful reasons to maintain a non-fanatical worldview in the face of pressure to self-modify, that’s an objection to rationalism. Conversely, altering the moral beliefs of fanatics with access to nuclear weapons strikes me as an extremely important practical project. I suspect similar considerations will apply if you consider powerful unfriendly AIs.
This reminds me of that line of Yeats, that “the best lack all conviction, while the worst are full of passionate intensity.” Ideological differences sometimes culminate in wars, and if you want to win those wars, you may need something better than “we have our morals and they have theirs.”
To sharpen the point slightly: There’s an asymmetry between the rationalists and the fanatics, which is that the rationalists are aware that they don’t have a rational justification for their terminal values, but the fanatic does have a [fanatical] justification. Worse, the fanatic has a justification to taboo thinking about the problem, and the rationalist doesn’t.
Just because morality is personal doesn’t make it not real. If you model people as agents with utility functions, the reason not to change is obvious—if you change, you won’t do all the things you value. Non-fanatics can hold onto their values this way just as much as fanatics can.
The difference comes when you factor in human irrationality. And sure, fanatics might resist where anyone sane would give in. “We will blow up this city unless you renounce the Leader,” something like that. But on the other hand, rational humans might resist techniques that play on human irrationality, where fanatics might even be more susceptible than average. Good cop / bad cop for example.
What about on a national scale, where, say, an evil mastermind threatens to nuke every nation that does not start worshiping the Flying Spaghetti Monster? Well, what a rational society would do is compare benefits and downsides, and worship His Noodliness if it was worth it. Fanatics would get nuked. I fail to see how this is an argument for why we shouldn’t be rational.
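To make that comparison concrete, here is a minimal sketch of the calculation (the `expected_utility` helper and every number in it are invented purely for illustration, not drawn from the discussion above):

```python
# A rational society facing "worship His Noodliness or be nuked" just compares
# expected utilities under its *current* values and takes the better option.
# All probabilities and utilities below are made up for illustration.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

comply = [(1.0, -10)]                 # humiliating ritual, everyone lives
refuse = [(0.7, -1000), (0.3, 0)]     # 70% chance the threat is carried out

choice = "comply" if expected_utility(comply) > expected_utility(refuse) else "refuse"
print(choice)  # -> "comply": the ritual is cheap relative to the risk of annihilation
```

With different numbers (say, a demand that actually destroys what the society values most) the same calculation comes out the other way, which is the whole point: the decision tracks the stakes, not the threat’s ideology.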
if you want to win those wars, you may need something better than “we have our morals and they have theirs.”
And that’s why Strawmansylvania has never won a single battle, I agree. Just because morality is personal doesn’t make it unmoving.
Just because morality is personal doesn’t make it not real. If you model people as agents with utility functions, the reason not to change is obvious—if you change, you won’t do all the things you value.
Does this imply that if a rational actor has terminal values that are internally consistent and in principle satisfiable, it would always be irrational for the actor to change those values or allow them to change?
That doesn’t seem right either. Somehow, an individual improving their moral beliefs as they mature, the notional Vicar of Bray, and Pierre Laval are all substantially different cases of people changing their [terminal] beliefs in response to events. There’s something badly wrong with a theory that can’t distinguish those cases.
Also, my apologies if this has been already discussed to death on LW or elsewhere—I spent some time poking and didn’t see anything on this point.
Does this imply that if a rational actor has terminal values that are internally consistent and in principle satisfiable, it would always be irrational for the actor to change those values or allow them to change?
No, but it sets a high standard: if you value, say, the company of your family, then modifying yourself to not want that (and therefore not spend much time with your family) costs as much as if you were kept away from your family by force for the rest of your life. So any threat has to be pretty damn serious, and maybe not even death would work, if you know important secrets or don’t place much value on a life lived without some of your key values.
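As a minimal sketch of that standard (the value names and numbers here are invented for illustration), the agent scores the modified version of itself with its *current* utility function, so giving up a value costs exactly what that value was worth:

```python
# An agent evaluating "should I stop valuing time with my family?" judges the
# modified self by its current values.  Numbers are invented for illustration.

CURRENT_VALUES = {"time_with_family": 100, "staying_alive": 150}

def utility_of_self(values_family: bool, alive: bool = True) -> int:
    """Lifetime utility, judged by CURRENT_VALUES, of an agent with these traits."""
    total = 0
    if values_family:
        total += CURRENT_VALUES["time_with_family"]  # a modified self wouldn't pursue this
    if alive:
        total += CURRENT_VALUES["staying_alive"]
    return total

cost_of_modifying = utility_of_self(values_family=True) - utility_of_self(values_family=False)
print(cost_of_modifying)  # 100: same loss as being kept from your family for life

# A threat only succeeds if refusing costs more than that, judged by current values:
threat_penalty = 60
print(threat_penalty > cost_of_modifying)  # False, so the agent refuses to self-modify
```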
an individual improving their moral beliefs as they mature, the notional Vicar of Bray, and Pierre Laval are all substantially different cases of people changing their [terminal] beliefs
I wouldn’t call all of those cases of modifying terminal values. From some quick googling (I didn’t know about the Vicar of Bray), what the Vicar of Bray cared about was being the vicar of Bray. What Pierre Laval cared about was being the head of the government and not being killed, maybe. So they’re maybe not good examples of changing terminal values, as opposed to instrumental ones.
Also “improving their moral beliefs as they mature” is a very odd concept once you think about it. How do you correctly judge whether a moral belief is right to hold, without having a correct ultimate belief from the start to do the judging? It’s really an example of how humans are emphatically not rational agents—we follow a bunch of evolved and cultural rules, which can appear to produce consistent behavior, but really have all these holes and internal conflicts. And things can change suddenly, without the sort of rational deliberation described above.
Also “improving their moral beliefs as they mature” is a very odd concept once you think about it. How do you correctly judge whether a moral belief is right to hold, without having a correct ultimate belief from the start to do the judging?
You could say the same about “improving our standards of scientific inference.” Circular? Perhaps, but it needn’t be a vicious circle. It’s pretty clear that we’ve accomplished it, so it must be possible.
I would cheerfully agree that humans aren’t rational and routinely change their minds about morality for non-rational reasons.
This is one of the things I was trying to get at. Ask when we should change our minds for non-rational reasons, and when we should attempt to change others’ minds using non-rational means.
The same examples I mentioned above work for these questions too.
Here’s what I had in mind with the reference to the Vicar of Bray. Imagine an individual with two terminal values: “stay alive and employed” and the reigning orthodoxy at the moment. He sincerely believes in both, and whenever they start to conflict, he changes his beliefs about the orthodoxy. He is quite sincere in advocating for the ruling ideology at each point in time; he really does believe in the divine right of kings, just so long as it’s not a dangerous belief to hold.
The beliefs in question are at least potentially terminal moral beliefs. Without delving deep into the history, let’s stipulate for the purpose of the conversation that we’re talking about a rational actor who has a sequence of terminal moral beliefs about what constitutes a just government, and that these beliefs shift with the political climate.
Now for contrast, let’s consider a hypothetical rational but very selfish child. The child’s parents attempt to change the child’s values to be less selfish, and they succeed. They do this by the usual parental tactics of punishment and example-setting, not by rational argument. By your social standard and mine, this is an improvement in the child.
Both the vicar and the child are updating their moral beliefs in response to outside pressure, not rational deliberation. The general consensus is that parents are obligated to bring up their children not to be overly self-centered and that reasoning with children is not a sufficient pedagogic technique, but conversely that coercive government pressure on religion is ignoble.
Is this simply that you and I think “a change in moral beliefs, brought about by non-reasonable means, is good (all else equal) if it significantly improves the beliefs of the subject by my standards”?
I’d agree with that. Maybe with some caveats, but generally yes.
I think the caveats will turn out to matter a lot. One of the things that human moral beliefs do, in practice, is give other humans some reasons to trust you. If I know that you are committed, for non-instrumental reasons, to avoid manipulating* me into changing my values, that gives me reasons to trust you. Conversely, if your moral view is that it’s legitimate to lie to people to make them do what you want, people will trust you less.
Obviously, people have incentives to lie about their true values. I think equally obviously, people are paying attention and looking hard for that sort of hypocrisy.
*This sentence is true for a range of possible expansions of “manipulating”.
My statement was more observational than prescriptive, though. Sure, a rational agent can be averse to manipulating other people (and humans often are too), because agents can care about whatever they want. But that doesn’t bear very strongly on how the language is used, compared to the fact that in real-world usage I see people use phrases like “improved his morals” according to only three standards: consistency, how much society approves, and how much you approve.
I think the worry here is that realizing ‘right’ and ‘wrong’ are relative to values might make us give up our values. Meanwhile, those who aren’t as reflective are able to hold more strongly onto their values.
But let’s look at your deep worry about fanatics with nukes. Does their disregard for life also have to be some kind of abstract error for you to keep, and act on, your own strong regard for life?
I think the worry here is that realizing ‘right’ and ‘wrong’ are relative to values might make us give up our values. Meanwhile, those who aren’t as reflective are able to hold more strongly onto their values.
Almost. What I’m worried about is that acknowledging or defining values to be arbitrary makes us less able to hold onto them and less able to convince others to adopt values that are safer for us. I think it’s nearly tautological that right and wrong are defined in terms of values.
The comment about fanatics with nuclear weapons wasn’t to indicate that that’s a particular nightmare of mine. It isn’t. Rather, that was to get at the point that moral philosophy isn’t simply an armchair exercise conducted amongst would-be rationalists—sometimes having a good theory is a matter of life and death.
It’s very tempting, if you are firmly attached to your moral beliefs, and skeptical about your powers of rationality (as you should be!), to react to countervailing opinion by not listening. If you want to preserve the overall values of your society, and are skeptical of others’ powers of rational judgement, it’s tempting to have the heretic burnt at the stake, or the philosopher forced to drink the hemlock.
One of the undercurrents in the history of philosophy has been an effort to explain why a prudent society that doesn’t want to lose its moral footings can still allow dissent, including dissent about important values, that risks changing those values to something not obviously better. Philosophers, unsurprisingly, are drawn to philosophies that explain why they should be allowed to keep having their fun. And I think that’s a real and valuable goal that we shouldn’t lose sight of.
I’m willing to sacrifice a bunch of other theoretical properties to hang on to a moral philosophy that explains why we don’t need heresy trials and why nobody needs to bomb us for being infidels.