Surely any hypothesis about what to do may involve modelling the surrounding environment, including other agents, as required. So what, exactly, is the distinction between “single-player decision theory” and “game theory” supposed to come down to?
I cannot state the question formally (that’s part of the problem), but here’s an informal version: is there a definition of “maximizing utility given the environment” that is in some sense “optimal” when the environment is a big, tangled computer program (or something even more complex) containing other similarly smart agents?
I do think that “maximize utility” is a perfectly good answer to the question of what to do, though it is kind of passing the buck. However, for most agents the environment is vast and complex compared to them, so they surely won’t behave remotely optimally. Since agent code seems to be easily copied, this “problem” may be with us for a while yet.
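To make the naive reading of “maximize utility given the environment” concrete, here is a toy sketch (my own illustration, not from the discussion): brute-force argmax over actions, assuming the agent has a model of the environment it can simply run forward. The function names and the cooperate/defect example are hypothetical. The catch the question points at is that when the environment contains other similarly smart agents who are modelling this agent in turn, `environment_model(action)` may be intractable or not even well-defined, which is exactly where the single-player picture strains.

```python
def best_action(actions, environment_model, utility):
    """Naive utility maximization: simulate each action through a model
    of the environment and pick the one whose outcome scores highest."""
    return max(actions, key=lambda action: utility(environment_model(action)))

# Hypothetical example: a fixed (non-reactive) environment mapping
# each action directly to an outcome value.
env = {"cooperate": 3, "defect": 5}.get
choice = best_action(["cooperate", "defect"], env, lambda outcome: outcome)
print(choice)  # → defect
```

This works only because `env` here does not model the agent back; with another optimizing agent inside the environment, the simple forward simulation no longer suffices.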
I am not sure I understand the question :-(
Is the “central insight of game theory” described at http://en.wikipedia.org/wiki/Game_theory ?