I’ve tried to understand why this is so important. There have been a number of attempts to explain it to me (many using the $5/$10 and the robot bridge examples), and they have all bounced off my brain. I’m sure this is my fault, not yours or Martin Löb’s, but it keeps coming up and I can’t figure out where the missing fulcrum is that would make me care about it.
So, counterfactual reasoning is impossible in formal systems because a false premise implies any conclusion. Fine; we already knew from Gödel that formal systems are incomplete (or inconsistent, but we can discard those entirely). There are guaranteed to be truths that can’t be proven.
My best answer here is in the form of this paper I wrote, which discusses these dilemmas and a number of others. Decision-theoretic problems like the ones here are examples of subtle flaws in seemingly reasonable frameworks for making decisions, flaws that can lead to unexpected failures in niche situations. For agents vulnerable to spurious proofs or to trolls, there are adversarial situations that could effectively exploit these weaknesses. These issues aren’t tied to incompleteness so much as they are examples of ways that agents could be manipulated.
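To make the spurious-proof worry concrete, here is roughly how the Löbian version of the $5/$10 story usually goes (my own sketch of the idealized, unbounded-proof-search version, not a quote from the paper). Write A for the agent’s choice and U for the payoff, and suppose the agent’s rule is: search for a PA-proof of P := (A = 5 → U = 5) ∧ (A = 10 → U = 0), take the $5 if one is found, otherwise take the $10.

1. Reason inside PA and assume □P, i.e., that PA proves P. Then the agent’s search eventually finds that proof, so by its rule A = 5 and hence U = 5.
2. So A = 5 → U = 5 holds, and A = 10 → U = 0 holds vacuously, precisely because a false premise implies any conclusion. Hence P.
3. Steps 1–2 establish □P → P, and PA can formalize this reasoning, so PA ⊢ □P → P.
4. Löb’s theorem turns PA ⊢ □P → P into PA ⊢ P. So the agent really does prove P, takes the $5, and ends up with a “counterfactual” about the $10 that was never checked against anything real.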