Does it matter? The point is that (according to my morality computation) it is unfair to execute a person who is 50% likely to be innocent, even though the “total number of saved lives” utility of this action may be greater than that of the alternative. And the fairness of the procedure counts for something, even in terms of the “total number of saved lives”.
So, let’s say this hypothetical situation was put to you several times in sequence. The first time you decline on the basis of fairness, and the guy turns out to be innocent. Yay! The second time he walks out and murders three random people. Oops. After the hundredth time, you’ve saved fifty lives (because if the guy turns out to be a murderer you end up executing him anyway) and caused a hundred and thirty-five random people to be killed.
Success?
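To make the arithmetic explicit, here is a minimal expected-value sketch of the repeated hypothetical. The parameter names and values are illustrative assumptions; only the 50% innocence rate, the three victims per release, and the hundred repetitions come from the comment above. With exactly three victims per released murderer the tally comes to 150 rather than the 135 quoted, so the original scenario presumably assumed slightly different numbers.

```python
# Minimal sketch of the repeated-trials tally; parameter values are
# illustrative assumptions based on the comment above, not an exact model
# of the original scenario.
n_trials = 100           # the hypothetical is repeated a hundred times
p_innocent = 0.5         # stated 50% chance the suspect is innocent
victims_per_release = 3  # assumed victims per released murderer
                         # (he is executed afterwards either way)

innocents_saved = n_trials * p_innocent
bystanders_killed = n_trials * (1 - p_innocent) * victims_per_release

print(f"Innocent lives saved by never executing: {innocents_saved:.0f}")
print(f"Bystanders killed by released murderers: {bystanders_killed:.0f}")
# With these round numbers the tally is 50 saved vs. 150 killed; the
# comment's figure of 135 implies slightly different assumptions, but the
# qualitative trade-off is the same.
```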
No :( Not when you put it like that...
Do you conclude, then, that fairness is worth zero human lives? Not even a 0.0000000001% probability of saving a life should be sacrificed for its sake?
Maybe it’s my example that was stupid and better ones exist.
Upvoted for gracefully conceding a point. (EDIT: I mean, conceding the specific example, not necessarily the argument.)
I think that fairness matters a lot, but a big chunk of the reason for that can be expressed in terms of further consequences: if the connection between crime and punishment becomes more random, then punishment stops working so well as a deterrent, and more people will commit murder.
Being fair even when it’s costly affects other people’s decisions, not just the current case, and so a good consequentialist is very careful about fairness.
I thought of trying to assume that fairness only matters when other people are watching. But then, in my (admittedly already discredited) example, wouldn’t the solution be “release the man in front of everybody, but later kill him quietly; or, even better, quietly administer a slow-acting fatal poison before releasing him”? Somehow, this is still unfair.
Well, that gets into issues of decision theory, and my intuition is that if you’re playing non-zero-sum games with other agents smart enough to deduce what you might think, it’s often wise to be predictably fair/honest.
(The idea you mention seems like “convince your partner to cooperate, then secretly defect”, which only works if you’re sure you can truly predict them and that they will falsely predict you. More often, it winds up as defect-defect.)
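A toy illustration of that failure mode, using the standard Prisoner’s Dilemma payoffs; the numbers and the little `play` helper are illustrative assumptions, not anything from the thread.

```python
# Toy Prisoner's Dilemma payoffs (T=5 > R=3 > P=1 > S=0), purely illustrative.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # you cooperate, they defect
    ("D", "C"): (5, 0),  # you defect against a cooperator
    ("D", "D"): (1, 1),  # mutual defection
}

def play(my_move, their_prediction_of_me):
    """The other agent is modeled as a conditional cooperator: it cooperates
    only if it predicts that you will cooperate (an assumed predictor)."""
    their_move = "C" if their_prediction_of_me == "C" else "D"
    return PAYOFFS[(my_move, their_move)]

# "Convince them to cooperate, then secretly defect" only pays off if the
# predictor is fooled into a false prediction of you:
print(play("D", "C"))  # (5, 0) -- requires them to falsely predict "C"
# Against an agent that actually predicts your move correctly, it collapses to:
print(play("D", "D"))  # (1, 1) -- defect-defect, worse for both than...
print(play("C", "C"))  # (3, 3) -- predictable mutual cooperation
```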
Hmm. Decision theory and corresponding evolutionary advantages explain how the feelings and concepts of fairness/honesty first appeared. But now that they are already here, do we have to assume that these values are purely instrumental?
Well, maybe. I’m less sure than before.
But I’m still miles from relinquishing SPECKS :)
EDIT: Understood your comment better after reading the articles. Love the PD-3 and rationalist ethical inequality, thanks!
Instrumental to what? To providing “utility”? Concepts of fairness arose to enhance inclusive fitness, not utility. If these norms are only instrumental, then so are the norms of harm-avoidance that we’re focused on.
Since these norms often (but not always) “over-determine” action, it’s easy to conceive of one of them explaining the other—so that, for example, fairness norms are seen as reifications of tactics for maximizing utility. But the empirical research indicates that people use at least five independent dimensions to make moral judgments: harm-avoidance, fairness, loyalty, respect, and purity.
EY’s program to “renormalize” morality assumes that our moral intuitions evolved to solve a single function but fall short of it because of design defects (relative to present needs). But it’s more likely that they evolved to solve different problems of social living.
I meant “instrumental values” as opposed to “terminal values”, something valued as means to an end vs. something valued for its own sake.
It is universally acknowledged that human life is a terminal value. Also, the “happiness” of said life, whatever that means. In your terms, these two would be the harm-avoidance dimension, I suppose. (Is it a good name?)
Then, there are loyalty, respect, and purity, which I, for one, immediately reject as terminal values.
And then, there is fairness, which is difficult. Intuitively, I would prefer to live in a universe which is more fair than in one which is less fair. But if it would cost lives, or the quality and happiness of those lives, etc., then… unclear. Fortunately, orthonormal’s article shows that if you take the long view, fairness doesn’t really oppose the principal terminal value in the standard moral “examples”, which (like mine) usually only look one short step ahead.
On the web site I linked to, the research suggests that for many people in our culture loyalty, purity, and respect are terminal values. Whether they’re regarded as such or not seems a function of ideology, with liberals restricting morality to harm-avoidance and fairness.
For myself, I have a hard time thinking of purity as a terminal value, but I definitely credit loyalty. I think it’s worse to secretly wrong a friend who trusts you than a stranger. I suppose that’s the sort of stance a utilitarian would want to talk me out of, but this seems a function of their societal vision rather than of moral intuition.
Utilitarianism seems to me a bureaucrat’s disease. The utilitarian asks what morality would make for the best society if everyone internalized it. From this perspective, the status of the fairness value is a hard problem: are you concerned only with total utility, or does distribution matter? My “intuition” is that fairness does matter, because the guy at the bottom reaps no necessary benefit from increasing total utility (like the tortured guy in the SPECKS question). But again, this seems an ideological matter.
But the question of which moral systematization would produce the best society is interesting only to utopians. The “official” operative morality is a compromise between ideological pressures and basic moral intuitions. Truly “adopting” utilitarianism as a society isn’t an option: the further you deviate from moral intuition, the harder it is to get compliance. And what morality an individual person ought to adopt can’t be a decision based on morality; rather, it should respond to prudential considerations.
No, I don’t think a consequentialist would want to talk you out of it. After all, the point is that loyalty is not a terminal value, not that it’s not a value at all. Wronging a friend would immediately lead to much more unhappiness than wronging a stranger. And the long-term consequence of a disloyal-to-friends policy would be a much lower quality of life.