As has been pointed out, this is not an anthropic problem, but there is still a paradox. I may be stating the obvious, but the root of the problem is that you are doing something fishy when you say that the other people will think the same way and that your decision will determine theirs.
The proper way to make a decision is to have a probability distribution over the code of the other agents (which will include their priors about your code). If you decide from that distribution, I believe (but can't prove) that you will take the correct course of action.
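To make that concrete, here is a minimal sketch (my own illustration, not from the comment above) of deciding by expected utility over a distribution on the other agent's possible programs instead of assuming they will mirror you. The candidate programs, the credences, and the payoff table are all assumed for the example.

```python
# Sketch: pick our action by maximizing expected payoff over a credence
# distribution on the *programs* the other agent might be running.
# All agents, probabilities, and payoffs below are illustrative assumptions.

# Hypothetical candidate programs; weight is our credence, program maps
# to their action (None is special-cased: that agent copies our action).
candidate_agents = {
    "always_cooperate": (0.4, lambda: "C"),
    "always_defect":    (0.3, lambda: "D"),
    "mirrors_us":       (0.3, None),
}

# Illustrative payoff table for us, indexed by (our_action, their_action).
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def expected_utility(our_action):
    """Expected payoff of our_action under the distribution over agent code."""
    total = 0.0
    for prob, program in candidate_agents.values():
        their_action = our_action if program is None else program()
        total += prob * payoff[(our_action, their_action)]
    return total

best = max(["C", "D"], key=expected_utility)
print(best, {a: expected_utility(a) for a in ["C", "D"]})
```

The point of the sketch is that the correlation between your decision and theirs enters only through the weight you put on mirror-like programs, not through an assumption that they must decide as you do.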
Newcomb-like problems fall in the same category; the trick is that there is always a belief about someone's decision-making hidden in the problem.