The copy problem is well specified, though, unlike the “predictor”. I clarified more in private. The worst part about Newcomb’s is that all the ex-religious folks seem to substitute something they formerly knew as ‘god’ for the predictor.

The agent can also be further specified, e.g. as a finite Turing machine made of cogs and levers and a tape with holes in it. The agent can’t simulate itself directly, of course, but it knows some properties of itself without simulation. E.g. it knows that in the alternative where it chooses to cooperate, its initial state was in set A (the states that result in cooperation); in the alternative where it chooses to defect, its initial state was in set B (the states that result in defection); and that no state is in both sets.
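To make that last point concrete, here is a minimal toy sketch (my own construction, not anything from the discussion): a deterministic finite machine whose eventual action is fixed by its initial state, so the initial states partition cleanly into a cooperation set A and a defection set B, with no state in both. The transition rule and state count here are arbitrary placeholders.

```python
# Toy illustration (hypothetical construction, not the original poster's model):
# a deterministic finite machine whose action is determined by its initial state.
# Partition the initial states into A (end in cooperation) and B (end in defection);
# determinism guarantees the two sets are disjoint.

def run_machine(initial_state: int, steps: int = 10) -> str:
    """Iterate an arbitrary deterministic transition rule; the parity of the final state picks the action."""
    state = initial_state
    for _ in range(steps):
        state = (3 * state + 1) % 16  # placeholder transition rule
    return "cooperate" if state % 2 == 0 else "defect"

states = range(16)
A = {s for s in states if run_machine(s) == "cooperate"}  # states that result in cooperation
B = {s for s in states if run_machine(s) == "defect"}     # states that result in defection

assert A.isdisjoint(B)  # no initial state is in both sets
```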