If the memory-wiped self is told that finding the coin wins, and also that a former self placed the coin for him to find, wouldn’t he infer that his former self would be trying to help him win [...]?
The FINDER should hear the same story as the HIDER, with the only change being that “win = find a coin”. The finder should also know that the hider received the false information that “win = not find a coin”. The finder should also know that the hider was told that the finder will receive the false information that “win = find a coin”, et cetera. (It seems like an infinite chain of information, but both hider and finder only need to receive a finite pattern describing it.)
Now we have a meta-problem: after such explanations, wouldn’t both hider and finder suspect that they are being given false information? If so, should they both expect a 50% probability of being lied to? Or is there some asymmetry, for example that rewarding “not find a coin” could make more sense to an outside spectator (who makes the rules) than rewarding “find a coin”? A perfectly rational spectator should prevent the hider and the finder from guessing which one of them is being lied to. But if each of them is right or wrong with probability exactly 50%, why should they even try?
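A quick worked expectation makes that last point concrete (my own sketch, assuming each player assigns probability 1/2 to either win condition being the real one): if $p$ is the probability that the coin gets found under whatever strategies they choose, then each player’s chance of winning is

$$\Pr[\text{win}] = \tfrac{1}{2}\,p + \tfrac{1}{2}\,(1 - p) = \tfrac{1}{2},$$

which is independent of $p$, so in expectation no amount of effort by either side changes their prospects.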
All of those are complications that needn’t arise with a slightly different formulation. Just imagine that we’re talking about someone who, for fun, decides to set such a challenge for their future self.
Anyway, we can postulate a person who hides something as best they can, then erases their own memory, then tries to locate what they previously hid. Both the hider version and the searcher version do the best they can, because that’s the maximum amount of fun for both of them. (The searcher will reintegrate the hider’s portion of their memories afterwards.)
I recall something like this coming up a few times in fiction: someone erases their memory of something because they need to not know it for a time, and later must rediscover it.
Memory Gambit.
Thank you.