Wouldn’t that be an example of agents faring worse with more information / more “rationality”? That should hint at a mistake in our conception of rationality, rather than suggesting it’s better to have less information / be less rational.
Doing worse with more knowledge means you are doing something very wrong. You should always be able to at least implement the same strategy you would use if you are ignorant, and preferably do better. You definitely should not do worse. If you find yourself regretting your “rationality” then you should reconsider what is rational.
It’s interesting to note that Eliezer was talking specifically about coordination when he wrote that (the quote is from Why Our Kind Can’t Cooperate). And this post claims common knowledge can make rational agents specifically less able to cooperate. The given example is this game:
Imagine that Alice and Bob are each asked to name dollar amounts between $0 and $100. Both Alice and Bob will get the lowest amount named, but whoever names that lowest number will additionally get a bonus of $10. No bonus is awarded in the case of a tie.
According to traditional game theory, the only rational equilibrium is for everyone to answer $0. This is because traditional game theory assumes common knowledge of the equilibrium; if any higher answer were given, there would be an incentive to undercut it.
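For concreteness, here’s how I read those rules as a payoff function, and the undercutting incentive the equilibrium argument relies on (a quick sketch of my own, not something from the post):

```python
def payoffs(a, b):
    """My reading of the quoted rules: both players get the lowest amount
    named, whoever named it alone gets a $10 bonus, and no bonus on a tie."""
    low = min(a, b)
    return low + (10 if a < b else 0), low + (10 if b < a else 0)

# The undercutting incentive: against an opponent naming $70,
# shading down to $69 beats matching them...
print(payoffs(70, 70))  # (70, 70)
print(payoffs(69, 70))  # (79, 69)
# ...and iterating that argument all the way down lands on the $0 equilibrium:
print(payoffs(0, 0))    # (0, 0)
```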
You didn’t say the name of the game, so I can’t go read about it, but thinking about it myself, it seems like one policy rational agents can follow that would fare better than naming zero is picking a number at random. If my intuition is correct, that would let each of them win the bonus about half of the time, while keeping the lowest amount named pretty high (slightly less than $50?). An even better policy would be to randomize between the maximum ($100) and the maximum minus one ($99), which I expect would outperform even humans. With this policy, common knowledge/belief would definitely help. (See the quick check below.)
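To check that intuition, here’s a quick Monte Carlo sketch (my own code; the strategy names are just labels I made up) comparing always naming $0, naming a uniformly random amount, and randomizing between $99 and $100:

```python
import random

def payoffs(a, b):
    """Both players get the lowest amount named; whoever named it alone gets +$10."""
    low = min(a, b)
    return low + (10 if a < b else 0), low + (10 if b < a else 0)

def average_payoff(strategy, trials=100_000):
    """Average per-player payoff when both players independently use `strategy`."""
    total = 0.0
    for _ in range(trials):
        pa, pb = payoffs(strategy(), strategy())
        total += (pa + pb) / 2
    return total / trials

always_zero  = lambda: 0
uniform_pick = lambda: random.randint(0, 100)
near_maximum = lambda: random.choice([99, 100])

print(average_payoff(always_zero))   # 0.0
print(average_payoff(uniform_pick))  # ~38 (lowest named averages ~33, plus ~5 expected bonus)
print(average_payoff(near_maximum))  # ~101.75
```

So if I’ve set this up right, the lowest amount named under uniform random picks averages closer to $33 than to $50, but the broader point stands: either randomized policy leaves both players far better off than the $0 equilibrium, and the $99/$100 one does best.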
The InfiniCheck message app problem is a bit more complicated. Thinking about it, it seems like the problem is that the system always creates an absence of evidence (which is evidence of absence) that exactly balances the evidence. That is, the way the system is built, it always provides an equal amount of evidence and counter-evidence, so the agent is always perfectly uncertain and demands/desires additional information. (If so, then a finite number of checks should make the problem terminate on the final check, correct?)
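Here’s a toy model of what I mean by “terminate on the final check” (entirely my own sketch: the loss probability and the decision rule are made-up assumptions, not anything from the post):

```python
import math
import random

def deliver_checks(max_checks, loss_prob=0.05):
    """Toy acknowledgment cascade: each successive checkmark ("I saw that you
    saw that...") arrives with probability 1 - loss_prob, and the cascade
    stops at the first loss or at the app's cap."""
    seen = 0
    while seen < max_checks:
        if random.random() < loss_prob:
            break
        seen += 1
    return seen

def will_act(checks_seen, max_checks):
    """A deliberately brittle rule: act only if there is no further checkmark
    whose absence could still count as evidence of a problem. With a finite
    cap that happens exactly when the final check arrives; with no cap, never."""
    return checks_seen >= max_checks

# Finite cap: the regress bottoms out whenever every check gets through.
finite_rate = sum(will_act(deliver_checks(3), 3) for _ in range(10_000)) / 10_000
print(f"3-check app: acts in ~{finite_rate:.0%} of runs")  # ~86% with loss_prob=0.05

# No cap (InfiniCheck): there is always one more check that could be missing.
print(will_act(deliver_checks(math.inf), math.inf))        # always False
```

Under this toy rule, the uncapped version never lets the agent act, while the capped version does as soon as the last check shows up, which is what I’d expect from the “absence of evidence at every level” framing.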
The question is whether it can be said that the demand/desire for additional information, rather than the additional information itself, creates the problem, or whether these can’t actually be distinguished, because that would just be calling the absence of evidence a “demand” for information rather than simply information (which it is).
Also, this actually seems like a case where humans would be affected in a similar way. Even with the one-checkmark system, people reason based on not seeing the checkmark, so except for the blur @Dacyn mentioned, I expect people will suffer from this too.
I also want to point out that commitment solves this, whether or not you adopt something like an updateless decision theory, because once you’ve simply said “I’m going to go here”, any number of checkmarks becomes irrelevant (a tiny sketch of what I mean is below). (Ah, re-reading that section for the third time, that’s actually exactly what you say, but I’m keeping this paragraph because if it wasn’t clear to me it might not have been clear to others either.)
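A trivial sketch of the point, with made-up names: a commitment is just a policy that is constant in the number of checkmarks that come back.

```python
def committed_policy(plan):
    """Return a decision rule that sticks to `plan` no matter how many
    checkmarks (acknowledgments) have been seen."""
    def decide(checks_seen):
        return plan  # the acknowledgment count is simply ignored
    return decide

meet_up = committed_policy("go to the cafe at 6pm")
assert meet_up(0) == meet_up(3) == meet_up(10**6)  # same decision regardless of checks
```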