But if you randomize, you run the risk of placing the coin somewhere it can easily be found, like on a table or on the floor. You could eliminate those risky locations by excluding them from your randomization process, but that would mean including a chain of reasoning!
That doesn’t undo the gain from removing the obvious places. As an example, look at the probability-density graphics in http://thevirtuosi.blogspot.com/2011/10/linear-theory-of-battleship.html . Imagine you smooth them out to remove the ‘obvious’ (initially high-probability) places, then pick a square at random according to that smoothed distribution. How does your other self undo this process?
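To make the idea concrete, here is a minimal sketch of that ‘flatten the obvious spots, then sample’ procedure. The grid size, the made-up prior over hiding spots, and the median-cap rule are all illustrative assumptions of mine, not taken from the linked post:

```python
import random

SIZE = 5  # hypothetical 5x5 grid of hiding spots

# Made-up prior: how likely a naive searcher is to check each square first
# (the centre squares are the 'obvious', high-probability places).
prior = [[1.0 / (1 + abs(r - SIZE // 2) + abs(c - SIZE // 2))
          for c in range(SIZE)] for r in range(SIZE)]

# Smooth out the obvious places: cap every square at the median weight,
# so the peaks of the prior no longer stand out.
weights_sorted = sorted(w for row in prior for w in row)
cap = weights_sorted[len(weights_sorted) // 2]
smoothed = [min(w, cap) for row in prior for w in row]

# Renormalise and draw one hiding square with those probabilities.
total = sum(smoothed)
squares = [(r, c) for r in range(SIZE) for c in range(SIZE)]
hiding_spot = random.choices(squares, weights=[w / total for w in smoothed])[0]
print("hide the coin at", hiding_spot)
```

Even a finder who knows this exact procedure only recovers the (nearly flat) distribution, not the actual draw, so the best they can do is search the remaining squares in whatever order the flattened weights suggest.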
(You also haven’t formulated it quite right. If the memory-wiped self is told that finding the coin wins, and also that a former self placed the coin for him to find, wouldn’t he infer that his former self would be trying to help him win and so would begin his search at a Schelling point? He wouldn’t even consider trying to beat an adversarial strategy like ‘randomize’, because he doesn’t think there’s an adversary!)
If the memory-wiped self is told that finding the coin wins, and also that a former self placed the coin for him to find, wouldn’t he infer that his former self would be trying to help him win [...]?
The FINDER should hear the same story as the HIDER, with the only change being that “win = find a coin”. The finder should also know that the hider received the false information that “win = not find a coin”. The finder should also know that the hider received the information that the finder will receive the false information that “win = find a coin”, et cetera. (It seems like an infinite chain of information, but both hider and finder only need to receive a finite pattern describing it.)
Now we have a meta-problem: after such explanations, wouldn’t both hider and finder suspect that they are being given false information? If so, should they both expect a 50% probability of being lied to? Or is there some asymmetry, for example that rewarding “not find a coin” could make more sense to an outside spectator (who makes the rules) than rewarding “find a coin”? A perfectly rational spectator should prevent hider and finder from guessing which one of them is being lied to. But if the chance of being right or wrong is exactly 50% for each of them, why should they even try?
All those are complications that needn’t arise with a slightly different formulation. Just imagine that we’re talking about someone who for fun decides to put such a challenge to their future self.
Anyway, we can postulate a person who hides something as best they can, then erases their own memory, and then tries to locate what they previously hid. Both the hider version and the searcher version try to do the best they can, because that’s the maximum amount of fun for both of them. (The searcher will reintegrate the hider portion of their memories afterwards.)
I recall something like this coming up a few times in fiction: someone erases their memory of something because they need to not know it for a time, and then, later, must rediscover it.
Memory Gambit.
Thank you.