I meant “instrumental values” as opposed to “terminal values”, something valued as means to an end vs. something valued for its own sake.
It is universally acknowledged that human life is a terminal value. Also, the “happiness” of said life, whatever that means. In your terms, these two would be the harm-avoidance dimension, I suppose. (Is it a good name?)
Then, there are loyalty, respect, and purity, which I, for one, immediately reject as terminal values.
And then, there is fairness, which is difficult. Intuitively, I would prefer to live in a universe which is more fair than in one which is less fair. But if it would cost lives, or the quality and happiness of those lives, etc., then… unclear. Fortunately, orthonormal’s article shows that if you take the long view, fairness doesn’t really oppose the principal terminal value in the standard moral “examples”, which (like mine) usually only look one short step ahead.
On the web site I linked to, the research suggests that for many people in our culture loyalty, purity, and respect are terminal values. Whether they’re regarded as such or not seems a function of ideology, with liberals restricting morality to harm-avoidance and fairness.
For myself, I have a hard time thinking of purity as a terminal value, but I definitely credit loyalty. I think it’s worse to secretly wrong a friend who trusts you than a stranger. I suppose that’s the sort of stance a utilitarian would want to talk me out of, but this seems a function of their societal vision rather than of moral intuition.
Utilitarianism seems to me a bureaucrat’s disease. The utilitarian asks what morality would make for the best society if everyone internalized it. From this perspective, the status of the fairness value is a hard problem: are you concerned only with total utility, or does distribution matter? My “intuition” is that fairness does matter, because the guy at the bottom reaps no necessary benefit from an increase in total utility (like the tortured guy in the SPECKS question). But again, this seems an ideological matter.
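To make the sum-versus-distribution point concrete, here is a toy sketch (the populations and utility numbers are invented purely for illustration, not anything from the thread): one world can beat another on total utility while leaving its worst-off member strictly worse off.

```python
# Toy sketch: total utility vs. a distribution-sensitive measure.
# All utility numbers below are made up for illustration only.

def total_utility(utilities):
    """Classic aggregate: just sum everyone's utility."""
    return sum(utilities)

def worst_off(utilities):
    """A crude distribution-sensitive measure: the minimum (Rawlsian flavor)."""
    return min(utilities)

# World A: modest, fairly even utilities.
world_a = [10, 10, 10, 10]
# World B: higher total, but one person ("the guy at the bottom") is much worse off.
world_b = [25, 25, 25, -20]

print(total_utility(world_a), total_utility(world_b))  # 40 vs. 55 -> B wins on the sum
print(worst_off(world_a), worst_off(world_b))          # 10 vs. -20 -> A wins on the minimum
```

Whether the minimum (or some other inequality-sensitive aggregate) is the right thing to care about is exactly the ideological question at issue; the sketch only shows that the two measures can come apart.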
But the question of which moral systematization would produce the best society is an interesting question only for utopians. The “official” operative morality is a compromise between ideological pressures and basic moral intuitions. Truly “adopting” utilitarianism as a society isn’t an option: the further you deviate from moral intuition, the harder it is to get compliance. And what morality an individual person ought to adopt cannot itself be a decision based on morality; rather, it should respond to prudential considerations.
I think it’s worse to secretly wrong a friend who trusts you than a stranger. I suppose that’s the sort of stance a utilitarian would want to talk me out of, but this seems a function of their societal vision rather than of moral intuition.
No, I don’t think a consequentialist would want to talk you out of it. After all, the point is that loyalty is not a terminal value, not that it’s not a value at all. Wronging a friend would immediately lead to much more unhappiness than wronging a stranger. And the long-term consequence of a disloyal-to-friends policy would be a much lower quality of life.