About two years ago, it very much felt like freedom from authority was a terminal value for me. Those hated authoritarians and fascists were simply wrong, probably due to some fundamental neurological fault that could not be reasoned with. The very prototype of “terminal value differences”.
And yet here I am today, having been reasoned out of that “terminal value”, such that I even appreciate a certain aesthetic in bowing to a strong leader.
On what basis do you assert you were “reasoned out” of that position? For example, what about your change of mind causes you to reject a conversion (Edit: not conversation) metaphor?
If that was a terminal value, I’m afraid the term has lost much of its meaning to me. If it was not, if even the most fundamental-seeming moral feelings are subject to argument, I wonder if there is any coherent sense in which I could be said to have terminal values at all.
Yes, that’s the problem with the conversion metaphor. If reasoning does not cause changes in terminal values, then it seems like terminal values are not real for some sense of real. Yet moral anti-realism feels so incredibly unintuitive.
Edit: The other way you might respond is that you have realized that you still value freedom, but have recently realized it is not a terminal value. But that makes the example less useful in figuring out how actual terminal values work.
Perhaps then we should speak of what we want in terms of “terminal values”? For example, I might say that it is a terminal value of mine that I should not murder, or that freedom from authority is good.
But what does “terminal value” mean? Usually, it means that the value of something is not contingent on or derived from other facts or situations. For example, I may value beautiful things in a way that is not derived from what they get me. The recursive chain of valuableness terminates at some set of values.
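Purely as an illustrative sketch of that “recursive chain” picture (my own toy example, not anything from the post; the specific values “money”, “security”, “wellbeing” are hypothetical placeholders): you can think of asking “why is X valuable?” as following “valued because…” links until you hit a value with no further justification, which is what “terminal” is supposed to mean.

```python
# Toy illustration (hypothetical): values as a justification chain.
# Asking "why is X valuable?" follows `because` links until it reaches
# a value with no further justification -- a "terminal" value.

values = {
    "money": "security",      # money is valued because it buys security
    "security": "wellbeing",  # security is valued because it protects wellbeing
    "wellbeing": None,        # valued for its own sake: the chain terminates here
}

def trace_justification(value: str) -> list[str]:
    """Follow the chain of 'valued because...' links until it terminates."""
    chain = [value]
    while values.get(value) is not None:
        value = values[value]
        chain.append(value)
    return chain

print(trace_justification("money"))  # ['money', 'security', 'wellbeing']
```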
… if even the most fundamental-seeming moral feelings are subject to argument, I wonder if there is any coherent sense in which I could be said to have terminal values at all.
TimS mentioned moral anti-realism as one possibility. I have a favorable opinion of desire utilitarianism (search for pros and cons), which is a system that would be compatible with another possibility: real and objective values, but not necessarily any terminal values.
By analogy, such a view would describe moral values the way epistemological coherentism (versus foundationalism) describes knowledge: the mental model would be a web rather than a hierarchy. At least it’s a possibility; I don’t intend to argue for or against it right now, as I have minimal evidence.
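To make the web-versus-hierarchy contrast concrete, here is a toy sketch under my own assumptions (the value names are hypothetical and this is only an analogy, not an argument for either picture): in the foundationalist hierarchy the “supports” relation bottoms out at terminal values, while in the coherentist web every value is supported by some other value and nothing need be terminal.

```python
# Toy contrast (hypothetical sketch): in a hierarchy the "supports" relation
# bottoms out at terminal values; in a web, values support one another in
# cycles, so nothing need be terminal even though every value is supported.

hierarchy = {           # foundationalist picture: edges point to what justifies a value
    "freedom": ["flourishing"],
    "order": ["flourishing"],
    "flourishing": [],  # terminal: justified by nothing further
}

web = {                 # coherentist picture: mutual support, no terminal node
    "freedom": ["flourishing"],
    "order": ["freedom"],
    "flourishing": ["order"],
}

def terminal_values(graph: dict[str, list[str]]) -> list[str]:
    """Values that are not justified by appeal to anything else."""
    return [v for v, supports in graph.items() if not supports]

print(terminal_values(hierarchy))  # ['flourishing']
print(terminal_values(web))        # [] -- every value is supported, none terminal
```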
On what basis do you assert you were “reasoned out” of that position?
I’ll admit it’s rather shaky and I’d be saying the same thing if I’d merely been brainwashed. It doesn’t feel like it was precipitated by anything other than legitimate moral argument, though. If I can be brainwashed out of my “terminal values” so easily, and it really doesn’t feel like something to resist, then I’d like a sturdier basis for my moral reasoning.
For example, what about your change of mind causes you to reject a conversation metaphor?
What is a conversation metaphor? I’m afraid I don’t see what you’re getting at.
The other way you might respond is that you have realized that you still value freedom, but have recently realized it is not a terminal value. But that makes the example less useful in figuring out how actual terminal values work.
I still value freedom in what feels like a fundamental way, I just also value hierarchy and social order now. What is gone is the extreme feeling of ickiness attached to authority, the feeling of sacredness attached to freedom, and the belief that these things were terminal values.
The point is that things I’m likely to identify as “terminal values”, especially in the context of disagreements, are simply not that fundamental, and are much closer to derived surface heuristics or even tribal affiliation signals.
I feel like I’m not properly responding to your comment though.
Nyan, I think your freedom example is a little off. The converse of freedom is not bowing down to a leader. It’s being made to bow. People choosing to bow can be beautiful and rational, but I fail to see any beauty in someone bowing when their values dictate they should stand.
I’ll admit it’s rather shaky and I’d be saying the same thing if I’d merely been brainwashed. It doesn’t feel like it was precipitated by anything other than legitimate moral argument, though. If I can be brainwashed out of my “terminal values” so easily, and it really doesn’t feel like something to resist, then I’d like a sturdier basis for my moral reasoning.
This could of course use more detail, unless you understand what I’m getting at.
The point is that things I’m likely to identify as “terminal values”, especially in the context of disagreements, are simply not that fundamental, and are much closer to derived surface heuristics or even tribal affiliation signals.
That’s certainly a serious risk, especially if terminal values work like axioms. There’s a strong incentive in debate or policy conflict to claim an instrumental value was terminal just to insulate it from attack. And then, through the failure mode identified in Keep Your Identity Small, one is likely to come to believe that the value actually is a terminal value for oneself.
I feel like I’m not properly responding to your comment though.
I took your essay as trying to make a meta-ethical point about “terminal values” and how using the term with an incoherent definition causes confusion in the debate. Parallel to when you said if we interact with an unshielded utility, it’s over, we’ve committed a type error. If that was not your intent, then I’ve misunderstood the essay.
Oops, it wasn’t really about how we use terms or anything. I’m trying to communicate that we are not as morally wise as we sometimes pretend to be, or think we are. That Moral Philosophy is an unsolved problem, and we don’t even have a good idea how to solve it (unlike, say, physics, where it’s unsolved but the problem is understood).
This is in preparation for some other posts on the subject, the next of which will be posted tonight or soon.
That Moral Philosophy is an unsolved problem, and we don’t even have a good idea how to solve it
That said, there have been centuries of work on the subject, which Eliezer unfortunately threw out because VNM-utilitarianism is so mathematically elegant.
If I can be brainwashed out of my “terminal values” so easily, and it really doesn’t feel like something to resist, then I’d like a sturdier basis for my moral reasoning.
Are you sure you aren’t simply trading open-ended beliefs for those that circularly support themselves to a greater extent? When you trust in an authority which tells you to trust in that authority, that’s sturdier.
I still value freedom in what feels like a fundamental way, I just also value hierarchy and social order now.
Gygax would say your alignment has shifted a step toward Lawful. I tend to prefer the Exalted system, which could represent such a shift through the purchase of a third dot in the virtue of Temperance.
What is a conversation metaphor? I’m afraid I don’t see what you’re getting at.
My fault for failing to clarify. There are roughly three ways one can talk about changes to an agent’s terminal values.
(1) Such changes never happen. (At a societal level, this proposition appears to be false.)
(2) Such changes happen through rational processes (i.e. reasoning).
(3) Such changes happen through non-rational processes (e.g. tribal affiliation + mindkilling).
I was using “conversion” as a metaphorical shorthand for the third type of change.
BTW, you might want to change “conversation” to “conversion” in the grandparent.
Ah! Thanks.
Ok. Then my answer to that is roughly what I said above.