The consequentialist can’t know the consequences of his actions with certainty, but he can list the likely possibilities and assign probabilities and error bars to them. If there’s no difference in probability between the more and less desirable consequence, or if the difference is well within the error bars, then there’s no way to determine whether the action is right or wrong using consequentialist morality.
For instance, if there’s a 50/50 chance torture will give you the answer, there’s no way to make the right choice. If it’s 60/40 with ±30 error bars, you still can’t make a right choice (though how much error-bar overlap you’re willing to tolerate is a matter of personal moral configuration). But if it’s 70/30 with ±5 error bars, a consequentialist can make a choice.
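To make that decision rule concrete, here’s a rough Python sketch of how I’m picturing it. The function name, the way I combine the error bars, and the overlap-tolerance parameter are my own illustration, not anything established above:

```python
# Rough sketch: decide only when the probability gap clears the error bars.
# The rule and the parameter names are illustrative, not a standard method.

def can_choose(p_good, err, overlap_tolerance=0.0):
    """p_good: estimated probability of the desirable outcome (binary case,
    so the undesirable outcome has probability 1 - p_good).
    err: the +/- error bar on that estimate.
    overlap_tolerance: the 'personal moral configuration' -- how much
    error-bar overlap you are still willing to act on.
    """
    gap = p_good - (1 - p_good)   # e.g. 0.70 - 0.30 = 0.40
    gap_uncertainty = 2 * err     # shifting p_good by err shifts the gap by 2 * err
    return gap > gap_uncertainty - overlap_tolerance

print(can_choose(0.50, 0.05))  # False: no probability difference at all
print(can_choose(0.60, 0.30))  # False: the 0.20 gap is swamped by the +/-0.30 bars
print(can_choose(0.70, 0.05))  # True:  the 0.40 gap is well outside the +/-0.05 bars
```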
This is, of course, complicated by the fact that we’re loaded with cognitive biases that will lead most people to make probability mistakes in a “ticking bomb” situation, and estimating error bars is an equally difficult skill to master. That, and most real situations aren’t simple dilemmas, but intractable quagmires of cascading consequences.
I think you’re making an important point about the uncertainty of what impact our actions will have. However, I think the right way to go about handling this issue is to put a bound on which impacts of our actions are likely to be significant.
As an extreme example, all the evidence available to me suggests that clapping my hands once right now will have essentially no impact on the people living in Tripoli. Very likely clapping my hands will affect only me (as no one is presently around), and probably in no huge way.
I have not built a formal statistical model to assess the significance, but I can say with reasonable confidence that it is very low. If we can analyze which events are causally significant for others, we would certainly make the moral inference problem much simpler.
Good point, cutting off very low-impact consequences is a necessary addition to keep you from spending forever worrying. I think you could apply the significance cutoff when making the initial list of consequences, then assign probabilities and uncertainty to those consequences that made the cut.
Your example also reminded me of butterflies and hurricanes. It’s sensible to have a cutoff for extremely low probabilities too (there is some chance that clapping your hands will cause a hurricane, but it’s not worth considering).
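To show how I imagine the two cutoffs working together, here’s a rough Python sketch. The thresholds, the Consequence fields, and the example numbers are all made up for illustration:

```python
# Rough sketch of the two cutoffs: drop any consequence whose impact or
# probability falls below a chosen threshold before doing further analysis.
from dataclasses import dataclass

@dataclass
class Consequence:
    description: str
    impact: float        # rough magnitude of the effect (arbitrary units)
    probability: float   # estimated chance the consequence occurs
    uncertainty: float   # +/- error bar on that probability estimate

MIN_IMPACT = 0.01        # significance cutoff
MIN_PROBABILITY = 1e-6   # "butterflies and hurricanes" cutoff

def worth_considering(c: Consequence) -> bool:
    return c.impact >= MIN_IMPACT and c.probability >= MIN_PROBABILITY

consequences = [
    Consequence("startle myself", impact=0.02, probability=0.5, uncertainty=0.2),
    Consequence("affect someone in Tripoli", impact=0.5, probability=1e-9, uncertainty=1e-9),
    Consequence("cause a hurricane", impact=1000.0, probability=1e-15, uncertainty=1e-15),
]

shortlist = [c for c in consequences if worth_considering(c)]
for c in shortlist:
    print(c.description)   # only "startle myself" survives both cutoffs
```

Only the consequences that pass both cutoffs would then get the full probability-and-error-bar treatment.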
The probability bound would solve the problem of cascading consequences too. For a given choice, you can form a probability estimate that it will, say, benefit your child. You can then take each scenario you’ve thought of and judged significant and possible, and consider its impact on your grandchildren. But now you’re multiplying probabilities, and in most cases you’ll quickly end up with insignificantly small probabilities for each secondary consequence, not worth worrying about.
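Roughly, in code (again just my own sketch, with made-up numbers and the simplifying assumption that each step in the chain is independent):

```python
# Rough sketch: chaining consequences multiplies probabilities, so any specific
# second- or third-order scenario quickly falls below the probability cutoff.
MIN_PROBABILITY = 1e-6

def chain_probability(step_probabilities):
    """Probability that one specific chain of consequences plays out, assuming
    (simplistically) that each step is independent of the ones before it."""
    p = 1.0
    for step in step_probabilities:
        p *= step
    return p

p_child = 0.3          # the choice benefits your child at all
p_grandchild = 0.001   # ...and that benefit reaches a grandchild in one specific way
p_further = 0.001      # ...and that in turn produces one specific further effect

for chain in ([p_child], [p_child, p_grandchild], [p_child, p_grandchild, p_further]):
    p = chain_probability(chain)
    print(len(chain), p, "worth considering" if p >= MIN_PROBABILITY else "below cutoff")
# Each extra level multiplies the probability down: ~0.3, then ~3e-4, then ~3e-7,
# and only the last falls below the cutoff.
```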
(Something seems off with this idea I just added to yours—I feel like there should be some relation between the difference in probability and the difference in value, but I’m not sure if that’s actually so, or what it should be.)