Here’s an attempt to ground this somewhat concretely.
Suppose there’s an iterated prisoner’s dilemma contest. At any iteration an agent can look at the history of plays that it and its opponent have made.
Suppose that TitForTatBot looks at the history, and sees that there’s been 100 rounds so far, and in every one it has defected and its opponent has cooperated. It proceeds to cooperate, because its opponent cooperated in the previous round. And so the “actual” game history will never be (D,C) x 100. What’s happened here is that someone has instantiated a TitForTatBot and lied to it. It’s not impossible that TitForTatBot will observe this history, but it’s impossible that this history actually happened, in some sense that I claim we care about.
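The situation above can be sketched concretely. This is a minimal illustration, not any particular contest framework’s API; the names (TitForTatBot, COOPERATE, DEFECT) are my own.

```python
COOPERATE, DEFECT = "C", "D"

class TitForTatBot:
    """Cooperates on the first round, then mirrors the opponent's last move."""
    def play(self, history):
        # history: list of (my_move, opponent_move) pairs, oldest first
        if not history:
            return COOPERATE
        return COOPERATE if history[-1][1] == COOPERATE else DEFECT

bot = TitForTatBot()

# A fabricated history: 100 rounds of (D, C) -- the bot "defected" every
# round while its opponent cooperated. The bot's own rule could never have
# produced this record (mirroring a cooperator, it would itself cooperate),
# yet nothing stops us from handing it this history as input.
fake_history = [(DEFECT, COOPERATE)] * 100
print(bot.play(fake_history))  # "C": it cooperates, exactly as described
```

The point the sketch makes is that observing a history and that history having actually happened come apart: the input is well-formed and the bot acts on it, even though no real sequence of play could have generated it.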
Hmm, no, I still don’t think this works. In the scenario you describe, it seems to me that TitForTatBot neither observed the specified history, nor did it actually happen—but it does observe finding itself in a scenario where that history (apparently) happened, and it does indeed actually find itself in a scenario where that history (apparently) happened.
Now, I think that your example does bring up an interesting and relevant point, namely: when should an agent question whether some of the things it seems to know or observe are actually false or illusory? Surely the answer is not “never”, else the agent will be easy to fool, and will make some very foolish decisions! So perhaps TitForTatBot (if we suppose that it’s not just a “bot” but also has some higher reasoning functions) might think: “Hmm, I defected 100 times? Sounds made-up, I think somebody’s been tampering with my memory! The proverbial evil neurosurgeons strike again!”
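The “higher reasoning” check could itself be made concrete: before trusting a presented history, the agent replays its own policy against the opponent’s recorded moves and flags any round where the record disagrees with what it would actually have done. Again, all names here are illustrative assumptions.

```python
COOPERATE, DEFECT = "C", "D"

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    if not history:
        return COOPERATE
    return COOPERATE if history[-1][1] == COOPERATE else DEFECT

def history_is_self_consistent(history, my_policy):
    """Could my_policy actually have generated my side of this history?"""
    for i, (my_move, _) in enumerate(history):
        if my_policy(history[:i]) != my_move:
            return False  # the record claims a move I would never have made
    return True

fake_history = [(DEFECT, COOPERATE)] * 100
print(history_is_self_consistent(fake_history, tit_for_tat))
# False: round 0 already fails -- tit-for-tat opens with C, not D --
# so the agent can conclude its "memory" has been tampered with.
```

The check fails on the very first round, which is the formal counterpart of the bot thinking “I defected 100 times? Sounds made-up.”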
But consider how this might work in the “Bomb” case. Should I find myself in the “Bomb” scenario, I might think: “A predictor that’s only been wrong one out of a trillion trillion times? And it’s just been wrong again? And there’s a bomb in this here Left box, and me an FDT agent, no less! Something doesn’t add up… perhaps one or more of the things I think I know, aren’t so!” And this seems like a reasonable enough thought. But surely it would then be far more reasonable to question the whole “one-in-a-trillion-trillion-accurate predictor” business, than to say “This bomb I see in front of me is fake, and the box is also fake! This whole scenario is fake!”
Right? I mean… how do I know this stuff about the predictor, and its accuracy? It’s a pretty outlandish claim, isn’t it—one mistake out of a trillion trillion? How sure am I that I’m privy to all the information about the predictor’s past performance? And really, the whole situation is weird: I’m the last person in existence, apparently? And so on… but the reality of me being alive, not wanting to die, and staring at an actual bomb right in front of me—well, if I trust anything, I’ll trust the evidence of my senses before I trust in some stuff I’ve been told about a long-dead predictor, or what have you.
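The intuition that doubting the backstory beats doubting one’s senses can be put as a rough Bayesian odds comparison. Every number below is an illustrative assumption of mine, not part of the scenario’s specification; the point is only how lopsided the odds are.

```python
# Hypothesis A: the backstory is accurate -- the predictor really errs only
# once per trillion trillion (1e24) runs, and I just witnessed such an error.
# Hypothesis B: the backstory is wrong (misreported, fabricated, misleading),
# in which case witnessing a "wrong" prediction is unremarkable.

p_error_given_story_true = 1e-24   # the stipulated accuracy, taken at face value
p_error_given_story_false = 0.5    # assumed: if the story is bogus, a miss is unsurprising

# Even granting the backstory an extremely charitable prior...
prior_story_true = 0.999999

posterior_odds = (prior_story_true * p_error_given_story_true) / \
                 ((1 - prior_story_true) * p_error_given_story_false)
print(posterior_odds)  # ~2e-18: overwhelming odds that the story, not my senses, is wrong
```

On these (admittedly made-up) numbers, the odds favor “the predictor story is false” over “a one-in-a-trillion-trillion event just happened to me” by roughly eighteen orders of magnitude, which is the shape of the skepticism argued for above.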
Anyway, this seems to me to be the kind of skepticism that makes sense in a situation like this. And none of it seems to lead to the sort of analysis described by the FDT proponents…