Somehow, I doubt I could achieve any more than 1% confidence that, say, the best plan to save 5 children in a burning building was to stab the fireman with a piece of glass, knock his ladder over, and pull him off it so his body fell nearly straight down and could serve as a cushion for the 5 children, who would each get off the body soon enough to let the next one follow. Actually, I was asked to assume not only that this is the best plan, but that it is certain to work, and that if I don’t carry it out, the 5 children will certainly die, but not the fireman or me.
Or, alternately, that I’m on a motorboat, and there’s a shark in the water who will eat 5 people unless I hit the accelerator soon enough and hard enough that one of my existing passengers will certainly be knocked off the back, and will certainly be eaten by the shark (perhaps it was a second shark, which would raise the likelihood that I couldn’t do some fancy boatwork to save ’em all). I do not have time to tell anyone to hold on—I absolutely MUST goose the gas to get there that extra half second early that somehow makes the difference between all five being eaten and none of them being eaten.
So your issue isn’t actually with (moral) reasoning under uncertainty or the trolley problem in general, it’s just with highly specific, really bad examples. Gotcha.
I think in general, if you find your plans to be complicated, involve causing someone else a large up-front cost, and you have very high confidence in the plan, the moral thing is to audit your certainty.