Moral rules are about actions, but in consequentialism they are judged strictly by their consequences. The real world is what connects actions to consequences; without that connection we couldn't talk about morality at all.
If you assume some vast simplification of the real world, or a least-convenient world, or something like that, the connection between actions and consequences changes completely, so the moral rules that are optimal in that scenario have no reason to be applicable to the real world.
The same holds if the real world changes significantly. Say we develop a fully reliable lie detector and start using it all the time (something I consider extremely unlikely). The same actions would then have different consequences, so consequentialism would say the moral rules governing our actions should change. For example, with such lie detectors it would be a good idea to test every person annually for whether they had committed a serious crime like murder or bribery, something that would be a very bad idea in our actual world.
Ah, I see. You meant that consequentialists can’t use simplified or extreme hypothetical scenarios to talk about consequentialist morality as applied to real decisions, not that they can’t do it at all. That was implicit in your ticking-time-bomb example but not explicit in your opening, and I missed it.
Shouldn't thought experiments for consequentialism then emphasize the difficult task of correctly determining consequences from minimal data? It seems such thought experiments would want to be stripped-down versions of real events: you would try to guess, from a random subset of features (to mimic the randomness of which aspects of the situation you would notice at the time), what the consequences of a particular decision were. So you'd hold the decision fixed and guess the consequences from the initial feature set.
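The procedure above can be sketched as a toy simulation. Everything here is illustrative and hypothetical (the event, its features, and the predictor are invented for the example, not drawn from any real data): hold the decision fixed, reveal only a random subset of the situation's features, and score how often a guesser recovers the actual consequence.

```python
import random

def sample_visible_features(event_features, k, rng):
    """Mimic noticing only a random subset of a situation's features."""
    names = list(event_features)
    visible = rng.sample(names, k)
    return {name: event_features[name] for name in visible}

def run_trial(event_features, actual_consequence, predict, k, rng):
    """Hold the decision fixed; guess the consequence from partial data."""
    visible = sample_visible_features(event_features, k, rng)
    return predict(visible) == actual_consequence

# Toy event (purely invented): guess whether a decision caused harm,
# seeing only 2 of the 3 features on any given trial.
event = {"witnesses": 3, "urgency": "high", "actor_informed": False}

def naive_predictor(visible):
    # Guess "harm" whenever we happen to see that the actor was uninformed.
    return "harm" if visible.get("actor_informed") is False else "benefit"

rng = random.Random(0)
hits = sum(run_trial(event, "harm", naive_predictor, k=2, rng=rng)
           for _ in range(1000))
print(f"correct guesses: {hits}/1000")
```

The point of the exercise is that accuracy is capped by which features happen to be visible: here the predictor can only succeed on the trials where the decisive feature was noticed at all.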
There's another issue too: it is extraordinarily complicated to assess what the ultimate outcome of a particular behavior is. I think this opens up a statistical question of which behaviors are "significant," in the sense that if you are choosing between A and B, is it even possible to distinguish their outcomes, or are they approximately the same?
In some cases the outcomes will be distinguishable, but I think in very many they would be approximately the same.
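That distinguishability question can be made concrete with a small Monte Carlo sketch. This is an illustration under invented assumptions (outcomes modeled as a true effect plus Gaussian noise; the effect sizes and noise levels are arbitrary), not a claim about any real moral data:

```python
import random
import statistics

def simulate_outcomes(true_effect, noise_sd, n, rng):
    """Model the long-run outcome of an option as effect + noise."""
    return [true_effect + rng.gauss(0, noise_sd) for _ in range(n)]

def distinguishable(a, b, threshold=2.0):
    """Crude two-sample check: is the mean gap large relative to its noise?"""
    gap = statistics.mean(a) - statistics.mean(b)
    pooled_se = ((statistics.stdev(a) ** 2 / len(a))
                 + (statistics.stdev(b) ** 2 / len(b))) ** 0.5
    return abs(gap) > threshold * pooled_se

rng = random.Random(1)

# Large effect, modest noise: the two options separate cleanly.
a = simulate_outcomes(1.0, 1.0, 500, rng)
b = simulate_outcomes(0.0, 1.0, 500, rng)
print("large effect distinguishable:", distinguishable(a, b))

# Tiny effect, heavy noise: A and B look approximately the same.
c = simulate_outcomes(0.01, 5.0, 500, rng)
d = simulate_outcomes(0.0, 5.0, 500, rng)
print("tiny effect distinguishable:", distinguishable(c, d))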
That's why I believe a person is responsible for the foreseeable consequences of their actions. If the chain of effects is so convoluted that a particular result cannot be foreseen, then it should not be used to assess the reasonableness of a person's actions. This is why I think general principles, such as refraining from coercion and fraud, should guide large areas of our actions, even for a consequentialist.
I am sympathetic to this, but would at least want to modify it to responsibility for reasonably foreseeable consequences. What is foreseeable is endogenous: it is a function of our actions and of our choices to seek information. We generally don't want to absolve people of responsibility for consequences that were unforeseeable only because they were reckless and didn't bother to gather sufficient information to make a proper decision.
(I agree.)