And this is where the fundamental AGI-doom arguments – all these coherence theorems, utility-maximization frameworks, et cetera – come in. At their core, they’re claims that any “artificial generally intelligent system capable of autonomously optimizing the world the way humans can” would necessarily be well-approximated as a game-theoretic agent. This, in turn, means that any system with the set of capabilities AI researchers ultimately want their models to have would inevitably have a set of potentially omnicidal failure modes.
This is my crux with people who have 90+% P(doom): will vNM (von Neumann–Morgenstern) expected utility maximization be a good approximation of the behavior of transformative AI (TAI)? You argue that it will, but I expect that it won’t.
My thinking related to this crux is informed less by the behaviors of current AI systems (although they still influence it to some extent) than by the failure of the agent foundations agenda. The dream 10 years ago was that if we started by modeling AGI as a vNM expected utility maximizer, and then gradually added more and more details to our model to account for differences between the idealized model and real-world AI systems, we would end up with an accurate theoretical system for predicting the behaviors AGI would exhibit. It would be a process similar to how physicists start with an idealized problem setup and add in details like friction or relativistic corrections.
But that isn’t what ended up happening. Agent foundations researchers ended up getting stuck on the cluster of problems collectively described as embedded agency, unable to square the dualistic assumptions of expected utility theory and Bayesianism with the embedded structure of real-world AI systems. The sub-problems of embedded agency are many and too varied to allow one elegant theorem to fix everything. Instead, they point to a fundamental flaw in the expected utility maximizer model, suggesting that it isn’t as widely applicable as early AI safety researchers thought.
The failure of the agent foundations agenda has led me to believe that expected utility maximization is only a good approximation for mostly-unembedded systems, and that an accurate theoretical model of advanced AI behavior (if such a thing is possible) would require a fundamentally different, less dualistic set of concepts. Coherence theorems and decision-theoretic arguments still rely on the old, unembedded assumptions and therefore don’t provide an accurate predictive model.
I agree that the agent-foundations research has been somewhat misaimed from the start, but I buy this explanation of John’s regarding where it went wrong and how to fix it. Basically, what we need to figure out is a theory of embedded world-modeling, one that captures the aspect of reality where the universe naturally decomposes into hierarchically arranged, sparsely interacting subsystems. Our agent would then be a perfect game-theoretic agent, but defined over that abstract (and lazy) world-model rather than over the world directly.
This would take care of agents needing to be “bigger” than the universe, counterfactuals, the “outside-view” problem, the realizability and the self-reference problems, the problem of hypothesis spaces, and basically everything else that’s problematic about embedded agency.
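To make that picture concrete, here’s a minimal toy sketch (the abstraction, dynamics, and utility function are entirely my own invention, not anything from John’s writeup): the agent is a textbook expected-utility maximizer, but it only ever plans over a small abstract summary of a much larger raw state.

```python
import random

# Toy illustration (my own construction, not John's actual proposal):
# the "world" has thousands of low-level variables, but the utility
# and the dynamics only depend on a coarse summary, so the agent can
# plan as a textbook expected-utility maximizer over the abstract
# model without ever touching the raw state again.

def abstract(raw_state):
    """Collapse each large subsystem into one bit: is its mean high?"""
    return tuple(sum(block) / len(block) > 0.5 for block in raw_state)

def abstract_transition(summary, action):
    """Hypothetical dynamics defined directly on summaries:
    action i toggles subsystem i."""
    return tuple(s ^ (action == i) for i, s in enumerate(summary))

def utility(summary):
    return sum(summary)  # prefer more subsystems in the 'high' state

def plan(summary, depth=3):
    """Exhaustive search for the best reachable abstract state.
    The raw state (which could be astronomically large) never appears."""
    if depth == 0:
        return utility(summary), None
    return max((plan(abstract_transition(summary, a), depth - 1)[0], a)
               for a in range(len(summary)))

raw = [[random.random() for _ in range(1000)] for _ in range(4)]
value, first_action = plan(abstract(raw))
print(f"best first action: {first_action}, achievable utility: {value}")
```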
A theory of embedded world-modeling would be an improvement over current predictive models of advanced AI behavior, but it wouldn’t be the whole story. Game theory makes dualistic assumptions too (e.g., by treating the decision process as not having side effects), so we would also have to rewrite it into an embedded model of motivation.
Cartesian frames are one of the few lines of agent foundations research in the past few years that seem promising, because they allow greater flexibility in defining agent-environment boundaries. Preferably, we would have a model that lets us avoid having to postulate an agent-environment boundary at all. Combining a successor to Cartesian frames with an embedded theory of motivation, likely some form of active inference, might give us an accurate overarching theory of embedded behavior.
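For concreteness, a Cartesian frame over a set of worlds W is just a set of agent options A, a set of environment options E, and an evaluation map from A × E to W; the flexibility comes from the same W admitting many such factorizations. A minimal sketch (the example worlds are my own, purely illustrative):

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

# A Cartesian frame over a set of worlds W is just (A, E, eval) with
# eval: A x E -> W. The same W admits many frames, i.e. many different
# places to draw the agent/environment boundary.

@dataclass(frozen=True)
class Frame:
    agent: FrozenSet[str]            # the agent's options (A)
    env: FrozenSet[str]              # the environment's options (E)
    eval: Callable[[str, str], str]  # (a, e) -> world in W

worlds = {"sunny_walk", "sunny_stay", "rainy_walk", "rainy_stay"}

# Frame 1: "I" choose walk/stay; the weather is my environment.
f1 = Frame(frozenset({"walk", "stay"}), frozenset({"sunny", "rainy"}),
           lambda a, e: f"{e}_{a}")

# Frame 2: the *same* worlds, with the boundary drawn so the "agent"
# is the weather. Nothing in W itself privileges either carving.
f2 = Frame(frozenset({"sunny", "rainy"}), frozenset({"walk", "stay"}),
           lambda a, e: f"{a}_{e}")

assert {f1.eval(a, e) for a in f1.agent for e in f1.env} == worlds
assert {f2.eval(a, e) for a in f2.agent for e in f2.env} == worlds
```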
It turns out that, in an idealized model of intelligent AI, we can remove the dualistic assumptions of game theory by positing a reflective oracle instead. The reflective oracle is allowed randomness in the territory (not just uncertainty in the map) to prevent paradoxes, and in particular its randomized answers are exactly the Nash equilibria of game theory: there is a one-to-one correspondence between reflective oracles and Nash equilibria.
Of course, whether it can transfer to our reality at all is sketchy at best, but at least a solution exists: https://arxiv.org/abs/1508.04145
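To give a flavor of the correspondence, consider matching pennies, where any deterministic prediction of a player’s move is self-refuting. The sketch below uses fictitious play as a cheap stand-in for finding the consistent randomized answer; it is my own illustration, not the paper’s construction:

```python
# Matching pennies: player 0 wants the coins to match, player 1 wants
# them to mismatch. Any deterministic "oracle" prediction of either
# player is self-refuting, since the other player would exploit it;
# this mirrors the paradoxes reflective oracles avoid by randomizing.
# The unique consistent answer (heads with probability 1/2) is exactly
# the game's mixed Nash equilibrium. Fictitious play below is a cheap
# stand-in for finding that fixed point, not an actual reflective
# oracle (which is defined over probabilistic oracle machines).

counts = [[1, 1], [1, 1]]  # counts[i][a]: times player i played action a

def best_response(player, opp_counts):
    p_heads = opp_counts[0] / sum(opp_counts)  # empirical P(opponent plays heads)
    if player == 0:  # matcher: copy the opponent's likelier action
        return 0 if p_heads >= 0.5 else 1
    return 1 if p_heads >= 0.5 else 0  # mismatcher: play the opposite

for _ in range(100_000):
    a0 = best_response(0, counts[1])
    a1 = best_response(1, counts[0])
    counts[0][a0] += 1
    counts[1][a1] += 1

# Both empirical frequencies drift toward 0.5, the only prediction
# that cannot be exploited by either player.
for i in (0, 1):
    print(f"player {i}: empirical P(heads) = {counts[i][0] / sum(counts[i]):.3f}")
```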
The reflective oracle model doesn’t have all the properties I’m looking for—it still has the problem of treating utility as the optimization target rather than as a functional component of an iterative behavior reinforcement process. It also treats the utilities of different world-states as known ahead of time, rather than as the result of a search process, and assumes that computation is cost-free. To get a fully embedded theory of motivation, I expect that you would need something fundamentally different from classical game theory. For example, it probably wouldn’t use utility functions.
Re treating utility as the optimization target: I think this isn’t, properly speaking, an embedded agency problem, but rather an empirical question of what the first AIs that automate everything will look like algorithmically. There are algorithms embeddable in reality that do optimize the utility/reward, like MCTS, and TurnTrout limits his post to the model-free policy-gradient case, like PPO and REINFORCE.
TurnTrout is correct to point out that not all RL algorithms optimize for reward, and that reward isn’t what the agent optimizes for by definition, but I think the post is too limited in describing when RL does optimize for the utility/reward.
So I think the biggest difference between @TurnTrout and people like @gwern is whether model-based RL that plans, or model-free policy-gradient RL, comes to dominate AI progress over the next decade.
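To make the distinction concrete, here’s a toy two-armed bandit (entirely my own illustration; the arm values and hyperparameters are made up). In the planning-style agent the reward estimate is the explicit optimization target, while in REINFORCE the reward only scales a gradient update:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.7])  # (made-up) expected reward per arm

def pull(arm):
    return rng.normal(true_means[arm], 0.1)

# Planning flavor (MCTS-like in spirit): reward estimates are the
# explicit optimization target; the agent computes them and argmaxes
# over them at decision time.
estimates, pulls = np.zeros(2), np.zeros(2)
for t in range(1000):
    arm = t % 2 if t < 20 else int(np.argmax(estimates))  # explore, then argmax
    pulls[arm] += 1
    estimates[arm] += (pull(arm) - estimates[arm]) / pulls[arm]
print("planner's explicit target:", estimates, "-> picks", int(np.argmax(estimates)))

# Model-free REINFORCE flavor: reward is never a quantity the agent
# reasons about; it only scales a gradient that reinforces whatever
# action was just taken, so the policy is shaped by reinforcement
# history rather than by explicit reward maximization.
theta = np.zeros(2)  # action logits
for _ in range(1000):
    probs = np.exp(theta) / np.exp(theta).sum()
    arm = int(rng.choice(2, p=probs))
    grad_logp = -probs
    grad_logp[arm] += 1.0                  # grad of log pi(arm) w.r.t. logits
    theta += 0.05 * pull(arm) * grad_logp  # reward enters only as a weight
print("REINFORCE policy:", np.exp(theta) / np.exp(theta).sum())
```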
Agreed that treating the utilities of different world-states as known, and treating computation as cost-free, makes it a very unrealistic model for human beings. Something like the reflective oracle model would only be possible if we warped the laws of physics severely enough that we didn’t have to care about the cost of computation at all (which is what would let us go from treating utilities as unknown to known in one step), and this is exactly why I don’t expect the reflective oracle model to transfer to reality at all.