One thing that I’ve been wondering about (but not enough to turn it into a proper thread) is how to talk about consequentialist morality. Deontologists can use thought experiments, because they’re all about rules, and getting rid of unnecessary real-world context makes it easier for them.
Consequentialists cannot use tricks like that. When asked whether it’s OK to torture someone in a “ticking bomb” scenario, answering that the real world doesn’t work like that (because of the possibility of mistakes, uncertainty about how likely torture is to work, slippery slopes, potential abuse of the power to torture once granted, and so on) is a perfectly valid reply.
So if we cannot really use thought experiments, how are we supposed to talk about it?
What prevents a consequentialist from accepting various hypothetical conditions arguendo and working out their consequences?
I’d consider it a possibly bad idea to actually do so, what with the known cognitive biases that might skew future decision making; but accepting arguendo that a particular consequentialist has overcome these biases, I can’t see a reason for her to refuse to consider least-convenient-world scenarios.
Moral rules are about actions, but in consequentialism they are judged strictly by their consequences. The real world is what connects actions to consequences; without that connection we couldn’t talk about morality at all.
If you assume some vast simplification of the real world, or assume a least-convenient world, or something like that, the connection between actions and consequences changes completely, and so the moral rules that are optimal in such a case have no reason to be applicable to the real world.
The same goes if the real world changes significantly. Let’s say we develop a fully reliable lie detector and start using it all the time (something I consider extremely unlikely). The same actions would then have different consequences, so consequentialism would say the moral rules governing our actions should change. For example, with such lie detectors it would be a good idea to routinely test every person annually on whether they had committed a serious crime like murder or bribery, something that would be a very bad idea in our actual world.
Ah, I see. You meant that consequentialists can’t use simplified or extreme hypothetical scenarios to talk about consequentialist morality as applied to real decisions, not that they can’t do it at all. That was implicit in your ticking-time-bomb example but not explicit in your opening, and I missed it.
(I agree.)
Shouldn’t thought experiments for consequentialism then emphasize the difficult task of correctly determining the consequences from minimal data? It seems like your thought experiments would want to be stripped-down versions of real events, where you try to guess, from a random set of features (to mimic the randomness of which aspects of the situation you would notice at the time), what the consequences of a particular decision were. So you’d hold the decision fixed and guess the consequences from the initial feature set.
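To make this concrete, here is a minimal Python sketch of the exercise I have in mind, under my own illustrative assumptions (the case features, the number of revealed features, and the use of the Brier score as the scoring rule are all just one way to set it up):

```python
# Toy sketch: hide a random subset of a real case's features, have the
# trainee estimate the probability of the known outcome, and score the
# estimate. Feature names and numbers are purely illustrative.
import random

def sample_featureset(features: dict, k: int) -> dict:
    """Reveal a random subset of k features, mimicking which aspects
    of a situation you would happen to notice at the time."""
    keys = random.sample(sorted(features), k)
    return {key: features[key] for key in keys}

def brier_score(predicted_prob: float, outcome_occurred: bool) -> float:
    """Squared error of a probability estimate; lower is better."""
    return (predicted_prob - float(outcome_occurred)) ** 2

# A stripped-down historical case: the decision is held fixed, and the
# trainee sees only a random slice of the surrounding context.
case_features = {
    "decision": "disclose the defect",
    "actor_role": "engineer",
    "time_pressure": "high",
    "stakeholders": 40,
    "prior_incidents": 2,
}
print("You see:", sample_featureset(case_features, k=3))

guess = 0.7  # trainee's estimate that the known consequence occurred
print("Score:", brier_score(guess, outcome_occurred=True))
```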
There’s another issue too, which is that it is extraordinarily complicated to assess what the ultimate outcome of a particular behavior is. I think this opens up a statistical question of what kinds of behaviors are “significant”, in the sense that, if you are choosing between A and B, is it possible to distinguish their outcomes or are they approximately the same?
In some cases they won’t be distinguishable, but I think that in very many they would be.
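To illustrate what I mean by “distinguishable”, here is a toy Python sketch. It assumes we somehow had sampled utility estimates for each option, and treats A and B as distinguishable only when their mean outcomes differ by more than about two combined standard errors; both the data and the two-standard-error rule are illustrative assumptions, not a worked-out proposal:

```python
# Sketch of the "are A and B distinguishable?" question: given sampled
# outcome values under each action, check whether the difference in
# means exceeds z combined standard errors.
from math import sqrt
from statistics import mean, stdev

def distinguishable(outcomes_a, outcomes_b, z=2.0):
    """True if mean outcomes differ by more than z combined standard
    errors; otherwise treat A and B as approximately the same for
    decision purposes."""
    diff = mean(outcomes_a) - mean(outcomes_b)
    se = sqrt(stdev(outcomes_a) ** 2 / len(outcomes_a)
              + stdev(outcomes_b) ** 2 / len(outcomes_b))
    return abs(diff) > z * se

# Hypothetical utility estimates for two choices:
a = [1.0, 1.2, 0.9, 1.1, 1.0]
b = [0.4, 0.6, 0.5, 0.7, 0.5]
print(distinguishable(a, b))  # True: here the difference is clear
```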
That’s why I believe a person is responsible for the foreseeable consequences of their actions. If the chain of effects is so convoluted that a particular result cannot be foreseen, then it should not be used to assess the reasonableness of a person’s actions. Which is why I think general principles, such as refraining from coercion and fraud, should guide large areas of our actions, even for a consequentialist.
I am sympathetic to this, but would at least want to modify it to responsibility for reasonably foreseeable consequences. What is foreseeable is endogenous: it is a function of our actions and our choices to seek information. We generally don’t want to absolve people of responsibility for consequences that went unforeseen only because they were reckless and didn’t bother to gather sufficient information to make a proper decision.
I doubt there actually are any strict consequentialists (or strict deontologists for that matter). E.g., would anyone be in favour of not punishing failed murder attempts?
To me, consequentialism/deontology always seem like post-hoc explanations of our not-all-too-rational moral intuitions: useful for describing the ‘moral rules playing field’, but not saying very much about how people really decide to act.
What does punishment have to do with consequentialism? Are you hypothesizing that not punishing failed murder attempts would reduce the number of successful murders, but that even people claiming to be consequentialists, and claiming to value that consequence, wouldn’t consider that solution? I would certainly be in favor of any reduction in punishment if it can be shown that the reduced punishment is more of a deterrent than the original.
Or are you saying that a murder attempt shouldn’t count as murder if no one actually died, and comparing that to your intuition of judging the intentions rather than the consequences? But intentions do matter when evaluating what effect a given punishment policy has on the decisions of potential murderers.
Well, strict consequentialists determine the goodness or badness of an action only by its consequences, not by the intentions of the actor. And that seems to fly in the face of our moral intuitions (as in the attempted-murder example), which is why I hypothesized that there are not many strict consequentialists.
As you suggest, a possible way out would be to say that we punish even attempted murder because it might discourage others from attempting (and possibly succeeding at) the same thing. And that is what I would call a ‘post-hoc explanation’.
The consequentialist can’t know the consequences of his actions, but he can list the likely possibilities and assign probabilities and error bars to the consequences. If there’s no difference in probability between the more and less desirable consequence, or if the difference is well within the error bars, then there’s no way to determine whether the action is right or wrong using consequentialist morality.
For instance, if there’s a 50/50 chance torture will give you the answer, there’s no way to make the right choice. If it’s 60/40 with ±30 error bars, you still can’t make a right choice (though the maximum acceptable error-bar overlap is a matter of personal moral configuration). But if it’s 70/30 with ±5 error bars, a consequentialist can make a choice.
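Here is a small Python sketch of that decision rule, with numbers mirroring the examples above; the max_overlap parameter is my stand-in for the “personal moral configuration” knob, and everything here is illustrative:

```python
# Act only when the probability gap between the better and worse
# outcome exceeds the combined error bars (up to a tolerated overlap).
def can_decide(p_good: float, err: float, max_overlap: float = 0.0) -> bool:
    """True if the intervals p_good +/- err and (1 - p_good) +/- err
    overlap by less than max_overlap."""
    gap = p_good - (1.0 - p_good)        # e.g. 0.70 - 0.30 = 0.40
    return gap - 2 * err > -max_overlap  # overlap must stay tolerable

print(can_decide(0.50, 0.00))  # False: 50/50, no right choice
print(can_decide(0.60, 0.30))  # False: +-30 swamps the 20-point gap
print(can_decide(0.70, 0.05))  # True:  +-5 leaves the gap intact
```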
This is, of course, complicated by the fact that we’re loaded with cognitive biases that will lead most people to make probability mistakes in a “ticking bomb” situation, and guessing error bars is an equally difficult skill to master. That, and most real situations aren’t simple dilemmas, but intractable quagmires of cascading consequences.
I think you’re making an important point about the uncertainty of what impact our actions will have. However, I think the right way to handle this issue is to put a bound on which impacts of our actions are likely to be significant.
As an extreme example, I think I have seen much evidence that clapping my hands once right now will have essentially no impact on the people living in Tripoli. Very likely clapping my hands will only affect myself (as no one is presently around) and probably in no huge way.
I have not done a formal statistical model to assess the significance, but I can state with reasonable confidence that it is very low. If we can analyze which events are causally significant for others, we would certainly make the moral inference problem much simpler.
Good point, cutting off very low-impact consequences is a necessary addition to keep you from spending forever worrying. I think you could apply the significance cutoff when making the initial list of consequences, then assign probabilities and uncertainty to those consequences that made the cut.
Your example also reminded me of butterflies and hurricanes. It’s sensible to have a cutoff for extremely low probabilities too (there is some chance that clapping your hands will cause a hurricane, but it’s not worth considering).
The probability bound would solve the problem of cascading consequences too. For a given choice, you can form some probability estimate that it will, say, benefit your child. You can then take each scenario you’ve thought of and ranked as significant and possible, and consider its impact on your grandchildren. But now you’re multiplying probabilities, and in most cases you will quickly end up with insignificantly small probabilities for each secondary consequence, not worth worrying about.
(Something seems off with this idea I just added to yours—I feel like there should be some relation between the difference in probability and the difference in value, but I’m not sure if that’s actually so, or what it should be.)
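For what it’s worth, here is a toy Python sketch of the cascading cutoff described above, assuming a plain probability threshold; the last line sketches one possible answer to the parenthetical worry, thresholding expected impact (probability times absolute value) rather than probability alone. All numbers are made up:

```python
# Consequences are expanded one generation at a time, multiplying
# probabilities along each branch; branches below a threshold are
# dropped as not worth worrying about.
def significant_branches(branches, p_cutoff=0.05):
    """branches: list of (probability, value) pairs for one choice."""
    return [(p, v) for (p, v) in branches if p >= p_cutoff]

# First generation: a 30% chance of benefiting your child.
first_gen = [(0.30, 10.0)]
# Second generation: each scenario branches again, multiplying
# probabilities, e.g. 30% of 30% = 9%, already near the cutoff.
second_gen = [(0.30 * 0.30, 8.0), (0.30 * 0.10, 25.0)]

print(significant_branches(first_gen))   # the 30% branch survives
print(significant_branches(second_gen))  # keeps only the 9% branch
# Expected-impact variant: keep branches where p * |value| is large,
# so a rare but big consequence (3% of value 25) can still survive.
print([(p, v) for (p, v) in second_gen if p * abs(v) >= 0.5])
```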