I don’t totally disagree, but see my reply to Gurkenglas as well as my reply to Andrew Sauer. Uncertainty doesn’t really save us, and the behavior isn’t really due to worst-case minimization: the agent can end up doing the same thing even if getting blown up is only slightly worse than not crossing! I’ll try to edit the post to add the argument showing that logical induction eventually fails here too (maybe not for a week, though). I’m much more inclined to say “Troll Bridge is too hard; we can’t demand so much of our counterfactuals” than I am to say “the counterfactual is actually perfectly reasonable” or “the problem won’t occur if we have reasonable uncertainty”.