Good point, cutting off very low-impact consequences is a necessary addition to keep you from spending forever worrying. I think you could apply the significance cutoff when making the initial list of consequences, then assign probabilities and uncertainty to those consequences that made the cut.
Your example also reminded me of butterflies and hurricanes. It’s sensible to have a cutoff for extremely low probabilities too (there is some chance that clapping your hands will cause a hurricane, but it’s not worth considering).
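To make the two cutoffs concrete, here's a rough sketch in Python. Everything in it is invented for illustration (the data, the numbers, the threshold values), and it applies both filters in one pass, though as you said the significance cutoff could be applied first, while drawing up the initial list, before bothering to estimate probabilities at all:

```python
# A minimal sketch of the two cutoffs described above.
# Data, numbers, and thresholds are all made up for illustration.

# Each candidate consequence: (description, estimated impact 0..1, estimated probability)
consequences = [
    ("benefits my child's education", 0.8, 0.6),
    ("mildly annoys a neighbor", 0.01, 0.9),      # fails the significance cutoff
    ("clapping causes a hurricane", 1.0, 1e-15),  # fails the probability cutoff
]

SIGNIFICANCE_CUTOFF = 0.05   # drop consequences with impact below this
PROBABILITY_CUTOFF = 1e-6    # drop consequences less likely than this

worth_considering = [
    (what, impact, prob)
    for what, impact, prob in consequences
    if impact >= SIGNIFICANCE_CUTOFF and prob >= PROBABILITY_CUTOFF
]
# Only the first consequence survives both filters.
```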
The probability bound would solve the problem of cascading consequences too. For a given choice, you can estimate a probability that it will, say, benefit your child. You can then take each scenario you've judged significant and plausible enough to keep, and consider its impact on your grandchildren. But now you're multiplying probabilities, and in most cases each secondary consequence quickly ends up with a probability too small to be worth worrying about.
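Here is how that pruning might look, again as a sketch: the tree of consequences, the conditional probabilities, and the cutoff are all invented, but the multiplication along each branch is the point.

```python
# A rough sketch of how the probability bound prunes cascading consequences.
# The tree structure, numbers, and cutoff are invented for illustration.

PROBABILITY_CUTOFF = 0.05

# Each node: (description, probability conditional on its parent, children)
consequence_tree = (
    "choice benefits my child", 0.6, [
        ("benefit carries over to a grandchild", 0.3, [
            ("and on to a great-grandchild", 0.2, []),
        ]),
    ],
)

def worth_worrying_about(node, prob_so_far=1.0):
    """Walk the tree, multiplying conditional probabilities along each path
    and pruning a branch once its cumulative probability is insignificant."""
    description, cond_prob, children = node
    cumulative = prob_so_far * cond_prob
    if cumulative < PROBABILITY_CUTOFF:
        return []  # this branch, and everything below it, is negligible
    kept = [(description, cumulative)]
    for child in children:
        kept.extend(worth_worrying_about(child, cumulative))
    return kept

print(worth_worrying_about(consequence_tree))
# [('choice benefits my child', 0.6),
#  ('benefit carries over to a grandchild', 0.18)]
# The great-grandchild branch (0.6 * 0.3 * 0.2 = 0.036) falls below the cutoff.
```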
(Something seems off with this idea I just added to yours—I feel like there should be some relation between the difference in probability and the difference in value, but I’m not sure if that’s actually so, or what it should be.)
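One candidate relation, offered tentatively: if what matters is expected impact, then a consequence with probability p and value v should count in proportion to the product p × v, and the two cutoffs above could collapse into a single threshold p × v ≥ ε, where a tenfold drop in probability is exactly offset by a tenfold rise in value. I'm not sure that's the right relation, and it reintroduces the hurricane problem whenever the value is extreme enough, so maybe a separate hard floor on probability is still needed on top of it.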