Seems to me you understood it and responded appropriately.
Game theory is inherently harder than individual rationality.
Yes it is. One important difference is that game-playing agents generally use mixed strategies. So we need to distinguish reading another player's strategy source code from reading its pseudo-random-number generator's source code and “seed”. Rational agents will want to keep secrets.
A second difference is that the agent's “preference structure” ought to be modeled as data about the agent, rather than data about the world. And some of this, too, is information a rational agent will wish to conceal.
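To make those two points concrete, here is a minimal Python sketch (MixedStrategyAgent and payoff_weights are made-up names, not anything from the post): the strategy's source code is the part the agent could safely publish, while the PRNG seed and the preference weights are exactly the data a rational agent would want to keep secret.

```python
import random

class MixedStrategyAgent:
    """Toy mixed-strategy player (hypothetical names throughout).

    The strategy's source code (this class) could be shown to an opponent,
    but the PRNG seed and the payoff weights are private state: revealing
    either one would let the opponent predict or exploit the agent.
    """

    def __init__(self, seed, payoff_weights):
        self._rng = random.Random(seed)    # secret: pseudo-random seed
        self._weights = payoff_weights     # secret: preference structure

    def act(self):
        # Mixed strategy: probability of "heads" derived from the
        # (private) preference weights.
        p_heads = self._weights["heads"] / sum(self._weights.values())
        return "heads" if self._rng.random() < p_heads else "tails"

# Reading this source reveals the *form* of the strategy, but without the
# seed and the weights an opponent cannot predict any individual move.
agent = MixedStrategyAgent(seed=42, payoff_weights={"heads": 3, "tails": 1})
print([agent.act() for _ in range(5)])
```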
So, as a toy model illustrating the compatibility of determinism and free will, your example (i.e. this posting) works fine. But as a way of modeling a flavor of rationality that “solves” one-shot PD and Newcomb-like problems in a “better” way than classical game theory, I think it doesn’t (yet?) work.
Slightly tangential: I have a semi-worked-out notion of a “fair” class of problems which includes Newcomb’s Problem because the predictor makes a decision based only on your action, not the “ritual of cognition” that preceded it (to borrow Eliezer’s phrase). But the one-shot PD doesn’t reside in this “fair” class because you don’t cooperate by just predicting the opponent’s action. You need to somehow tie the knot of infinite recursion, e.g. with quining or Löbian reasoning, and all such approaches seem to inevitably require inspecting the other guy’s “ritual of cognition”. This direction of research leads to all sorts of clever tricks, but ultimately it might be a dead end.
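For concreteness, the simplest version of that quining trick looks roughly like the sketch below (clique_bot and defect_bot are made-up names, not anyone's actual proposal): each bot cooperates exactly when the other's source matches its own, so cooperation is decided by inspecting the opponent's “ritual of cognition” rather than by predicting its action.

```python
import inspect

def clique_bot(my_source, opponent_source):
    """Cooperate only with an exact copy of itself.

    The knot of infinite recursion is tied not by predicting the
    opponent's *action* but by comparing its "ritual of cognition"
    (here, its source text) against our own.
    """
    return "C" if opponent_source == my_source else "D"

src = inspect.getsource(clique_bot)
print(clique_bot(src, src))                      # two identical copies: "C"
print(clique_bot(src, "def defect_bot(): ..."))  # anything else: "D"
```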
As for your main point, I completely agree. Any model that cannot accommodate limited information is bound to be useless in practice. I’m just trying to solve simple problems first, see what works and what doesn’t in very idealized conditions. Don’t worry, I’ll never neglect game theory—it was accidentally getting my hands on Ken Binmore’s “Fun and Games” that brought me back to math in the first place :-)