It turns out that in an idealized model of intelligent AI, we can remove the dualistic assumptions of game theory by instead positing a reflective oracle. The reflective oracle is allowed randomness in the territory (it is not just uncertainty in the map) in order to prevent paradoxes, and in particular the reflective oracle's randomized answers are exactly the Nash equilibria of game theory, because there is a one-to-one correspondence between reflective oracles and Nash equilibria.
Of course, whether any of this transfers to our reality is sketchy at best, but at least a solution exists:
https://arxiv.org/abs/1508.04145
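For concreteness, here is my paraphrase of the defining property of a reflective oracle from that paper (stated from memory, so treat the exact formulation as an assumption): the oracle O answers queries (M, p), where M is a probabilistic machine that may itself call O and p is a rational threshold, and its answers must agree with M's actual output distribution, with genuine randomization permitted only in the boundary case.

```latex
% Paraphrased reflective-oracle condition (arXiv:1508.04145), stated from memory.
% O answers queries (M, p): M is a probabilistic oracle machine with access to O,
% and p is a rational probability threshold.
\[
\Pr\!\left[M^{O}() = 1\right] > p \;\Longrightarrow\; O(M, p) = 1,
\qquad
\Pr\!\left[M^{O}() = 0\right] > 1 - p \;\Longrightarrow\; O(M, p) = 0,
\]
\[
\text{with } O(M, p) \text{ free to answer randomly in the remaining boundary cases.}
\]
```

That permitted randomization is the "randomness in the territory" that blocks the usual diagonalization paradoxes.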
The reflective oracle model doesn’t have all the properties I’m looking for—it still has the problem of treating utility as the optimization target rather than as a functional component of an iterative behavior reinforcement process. It also treats the utilities of different world-states as known ahead of time, rather than as the result of a search process, and assumes that computation is cost-free. To get a fully embedded theory of motivation, I expect that you would need something fundamentally different from classical game theory. For example, it probably wouldn’t use utility functions.
Re treating utility as the optimization target: I think this isn't, properly speaking, an embedded agency problem, but rather an empirical question of what the first AIs that automate everything will look like algorithmically. There are algorithms that can be embedded in reality and that do optimize the utility/reward, like MCTS, and TurnTrout limits his post to the model-free policy-gradient case, like PPO and REINFORCE.
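To make the contrast concrete, here is a minimal toy sketch (a made-up bandit with hypothetical reward values; the planner is a simple argmax standing in for MCTS-style planning, not anyone's actual setup). The planning agent treats the reward model as an explicit optimization target and searches over it, while the REINFORCE-style agent only ever uses reward as a multiplier on the policy gradient that reinforces whatever action was taken.

```python
import math
import random

# Toy 1-step bandit with 3 arms; both agents are trained on reward,
# but they use it in structurally different ways.
ARMS = [0, 1, 2]
TRUE_REWARD = {0: 0.1, 1: 0.9, 2: 0.5}  # hypothetical reward values

def pull(arm):
    return TRUE_REWARD[arm]

# (a) Planning-style agent (stand-in for MCTS): the reward model is an
# explicit optimization target; the agent searches over actions and
# returns the argmax of predicted reward.
def plan(reward_model):
    return max(ARMS, key=reward_model)

# (b) REINFORCE-style agent: reward never appears as a target inside the
# agent; it only scales the gradient that reinforces the sampled action.
def reinforce(steps=2000, lr=0.1):
    logits = [0.0, 0.0, 0.0]
    for _ in range(steps):
        zs = [math.exp(l) for l in logits]
        total = sum(zs)
        probs = [z / total for z in zs]          # softmax policy
        a = random.choices(ARMS, weights=probs)[0]
        r = pull(a)
        for i in ARMS:
            grad_log_pi = (1.0 if i == a else 0.0) - probs[i]
            logits[i] += lr * r * grad_log_pi    # reward as reinforcement signal
    return logits

print("planner picks arm:", plan(lambda a: TRUE_REWARD[a]))
print("REINFORCE logits after training:", reinforce())
```

Both end up favoring arm 1 here, but only the planner ever represents "maximize reward" as the thing it is doing; for REINFORCE, reward is just the strength with which past behavior gets chiseled in.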
TurnTrout is correct to point out that not all RL algorithms optimize for the reward, and that reward isn't what the agent optimizes for by definition, but I think the post is too limited in describing when RL does optimize for the utility/reward.
So I think the biggest difference between @TurnTrout and people like @gwern et al. is whether model-based RL that plans, or model-free policy-gradient RL, comes to dominate AI progress over the next decade.
Agreed that treating the utilities of different world-states as known, and treating computation as cost-free, makes it a very unrealistic model for human beings. Something like the reflective oracle model would only be possible if we warped the laws of physics severely enough that we didn't have to care about the cost of computation at all, which would let us go from unknown to known utilities in one step, and that is exactly why I don't expect the reflective oracle model to transfer to reality at all.
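To illustrate the "unknown to known in one step" point, here is a minimal sketch (the utility function and state space are made up for illustration): with cost-free computation you can exhaustively evaluate utility over the whole state space and simply read off the answer, whereas an embedded agent with a compute budget only ever has estimates produced by a search process.

```python
import itertools
import random

# Hypothetical utility function over world-states. In reality it is only
# defined implicitly and is expensive to evaluate; here it is a cheap stand-in.
def utility(state):
    random.seed(state)
    return random.random()

WORLD_STATES = range(10**5)  # stand-in for an enormous state space

# "Oracle" assumption: computation is free, so every state can be evaluated
# and the best one read off; utilities effectively become "known" in one step.
def oracle_best(states):
    return max(states, key=utility)

# Embedded agent: a bounded compute budget forces an actual search process, so
# utilities are estimates discovered along the way rather than givens.
def budgeted_search(states, budget=1000):
    best, best_u = None, float("-inf")
    for s in itertools.islice(states, budget):
        u = utility(s)
        if u > best_u:
            best, best_u = s, u
    return best, best_u

print("free computation:", oracle_best(WORLD_STATES))
print("bounded computation:", budgeted_search(WORLD_STATES))
```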