Search only generalizes well when you are able to accurately determine the available options, the consequences of selecting those options, and the utility of those consequences. It’s extremely unclear whether a mesa-optimizer would be able to do all three of these things well enough for search to actually generalize. Selection and Control makes some similar points.
We are already encountering some problems, however—Go, Chess, and Shogi, for example—for which this approach does not scale.
There should be a lot of caveats to this:
I’m pretty sure that even if you remove the MCTS at test time, AlphaZero will be very good at the game. (I’m pretty sure we could find numbers for this somewhere if it was a crux. I spent two minutes looking and didn’t find them.)
I’d also bet that with more compute and a larger model, AlphaZero (even without MCTS at test time) would continue improving.
AlphaZero assumes access to a perfect simulator of the environment (i.e. the rules of the game), which is why hardcoded search generalizes correctly. It’s not clear what would happen if you forced AlphaZero to also learn the rules of the game. That’s the setting used in Dota and StarCraft, and notably in both of those environments we did not use the AlphaZero approach, and we did see that it was “generally favorable for most of the optimization work to be done by the base optimizer”. (Unless you think that OpenAI Five / AlphaStar did in fact have mesa-optimizers, and we can’t tell because the neural nets are opaque.)
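For concreteness, here is a minimal sketch of the distinction being discussed: acting from the raw policy head versus wrapping the same learned components in test-time lookahead over a perfect simulator. Everything in it (the toy action space, `policy_score`, `value`, `simulate`) is a hypothetical stand-in, not anything taken from AlphaZero itself, and the lookahead is a plain greedy search rather than real MCTS.

```python
import random

# Hypothetical stand-ins: `policy_score` and `value` play the role of the learned
# policy/value heads, and `simulate` plays the role of the perfect simulator
# (the known game rules) that AlphaZero assumes access to.

ACTIONS = ["a", "b", "c"]

def policy_score(state, action):
    """Stand-in for a trained policy head: a pseudo-random score, deterministic within a run."""
    return random.Random(hash((state, action))).random()

def value(state):
    """Stand-in for a trained value head."""
    return random.Random(hash(state)).random()

def simulate(state, action):
    """Stand-in for the known rules: a deterministic next state."""
    return state + action

def act_policy_only(state):
    # "AlphaZero with the MCTS removed at test time": just take the policy argmax.
    return max(ACTIONS, key=lambda a: policy_score(state, a))

def act_with_lookahead(state, depth=2):
    # Test-time search (greedy lookahead rather than real MCTS, for brevity):
    # expand future states with the simulator and score each action by the best
    # value reachable within `depth` moves.
    def best_value(s, d):
        if d == 0:
            return value(s)
        return max(best_value(simulate(s, a), d - 1) for a in ACTIONS)
    return max(ACTIONS, key=lambda a: best_value(simulate(state, a), depth - 1))

if __name__ == "__main__":
    print(act_policy_only(""), act_with_lookahead(""))
```

Roughly, the caveats above say that the first function alone may already play quite well, and that when `simulate` has to be learned too (as in Dota and StarCraft), the second option stops being free.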
Arguably, this sort of task is only adequately solvable this way—if it were possible to train a straightforward DQN agent to perform well at Chess, it plausibly would have to learn to internally perform something like a tree search, producing a mesa-optimizer.
Given enough time and space, a DQN agent turns into a lookup table, which could encode the optimal policy for chess. I’d appreciate a rewrite of the sentence, or a footnote that says something to the effect of “assuming reasonable time and space limitations on the agent”.
I also disagree with the spirit of the sentence. My intuition is that with sufficient model capacity and training time (say, all of the computing resources in the world today), you could get a very large bundle of learned heuristics that plays chess well. (Depending on your threshold for “plays chess well”, AlphaZero without MCTS at test time might already reach it.) Of course, you can always amplify any such agent by throwing a small hardcoded MCTS or alpha-beta tree search on top of it, and so I’d expect the best agent at any given level of compute to be something of that form.
I believe AlphaZero without MCTS is still very good but not superhuman—around International Master level. That being said, it’s unclear how much optimization/search is currently going on inside of AlphaZero’s policy network. My suspicion would be that it currently does some, and that to perform at the same level as the full AlphaZero it would have to perform more.
I added a footnote regarding capacity limitations (though editing doesn’t appear to be working for me right now—it should show up in a bit). As for the broader point, I think it’s just a question of degree—for a sufficiently diverse environment, you can do pretty well with just heuristics, you do better introducing optimization, and you keep getting better as you keep doing more optimization. So the question is just what does “perform well” mean and what threshold are you drawing for “internally performs something like a tree search.”
you can do pretty well with just heuristics, you do better introducing optimization, and you keep getting better as you keep doing more optimization.
I agree with this, but I don’t think it’s the point that I’m making; my claim is more that “just heuristics” is enough for arbitrary levels of performance (even if you could improve that by adding hardcoded optimization).
So the question is just what does “perform well” mean and what threshold are you drawing for “internally performs something like a tree search.”
I don’t think my claim depends much on the threshold of “perform well”, and I suspect that if you do think the current model is performing something like a tree search, you could make the model larger and run the same training process and it would no longer perform something like a tree search.
my claim is more that “just heuristics” is enough for arbitrary levels of performance (even if you could improve that by adding hardcoded optimization).
This claim seems incorrect for at least some tasks (if you already think that, skip the rest of this comment).
Consider the following 2-player turn-based zero-sum game as an example of a task in which “heuristics” seemingly can’t replace a tree search.
The game starts with an empty string. In each turn the following things happen:
(1) the player appends either “A” or “B” to the end of the string.
(2) the string is replaced with its SHA256 hash.
Player 1 wins iff after 10 turns the first bit in the binary representation of the string is 1.
(Alternatively, consider the 1-player version of this game, starting with a random string.)
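For concreteness, here is a minimal Python sketch of that game, together with a brute-force minimax player that does solve it by exhaustive tree search. The encoding details (appending to the hex digest of the previous turn, reading the top bit of the first hex character) are assumptions filled in where the description leaves them open.

```python
import hashlib

TURNS = 10  # per the description: Player 1 wins iff the final first bit is 1

def step(state: str, move: str) -> str:
    """One turn: append "A" or "B", then replace the string with its SHA256 hash.
    (Assumption: we append to the hex digest produced by the previous turn.)"""
    return hashlib.sha256((state + move).encode()).hexdigest()

def first_bit(state: str) -> int:
    # Assumption: "first bit of the binary representation" = top bit of the first hex char.
    return (int(state[0], 16) >> 3) & 1

def minimax(state: str, turn: int) -> int:
    """Returns 1 if Player 1 wins with optimal play from this position, else 0.
    Player 1 moves on even turn indices and wants the final bit to be 1."""
    if turn == TURNS:
        return first_bit(state)
    outcomes = [minimax(step(state, m), turn + 1) for m in "AB"]
    return max(outcomes) if turn % 2 == 0 else min(outcomes)

if __name__ == "__main__":
    print("Player 1 wins under optimal play:", minimax("", 0))
```

With only 2^10 leaves the full tree is tiny, but because SHA256 scrambles any local structure, there is no obvious static heuristic for evaluating intermediate positions, which is the point of the example.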
Yeah, agreed; I meant that claim to apply to “realistic” tasks (which I don’t yet know how to define).
Machine learning seems hard to do without search, if that counts as a “realistic” task. :)
I wonder if you can say something about your motivation for talking about this, i.e., are there larger implications if “just heuristics” is enough for arbitrary levels of performance on “realistic” tasks?
Machine learning seems hard to do without search, if that counts as a “realistic” task. :)
Humans and systems produced by meta learning both do reasonably well at learning, and don’t do “search” (depending on how loose you are with your definition of “search”).
I wonder if you can say something about your motivation for talking about this, i.e., are there larger implications if “just heuristics” is enough for arbitrary levels of performance on “realistic” tasks?
It’s plausible to me that for tasks that we actually train on, we end up creating systems that are like mesa optimizers in the sense that they have broad capabilities that they can use on relatively new domains that they haven’t had much experience on before, but nonetheless, because they aren’t made up of two clean parts (mesa objective + capabilities), there isn’t a single obvious mesa objective that the AI system is optimizing for off distribution. I’m not sure what happens in this regime, but it seems like it undercuts the mesa optimization story as told in this sequence.
Fwiw, on the original point, even standard machine learning algorithms (not the resulting models) don’t seem like “search” to me, though they also aren’t just a bag of heuristics and they do have a clearly delineated objective, so they fit well enough in the mesa optimization story.
(Also, reading back through this comment thread, I’m no longer sure whether or not a neural net could learn to play at least the 1-player random version of the SHA game. Certainly in the limit it can just memorize the input-output table, but I wouldn’t be surprised if it could get some accuracy even without that.)
It’s plausible to me that for tasks that we actually train on, we end up creating systems that are like mesa optimizers in the sense that they have broad capabilities that they can use on relatively new domains that they haven’t had much experience on before, but nonetheless, because they aren’t made up of two clean parts (mesa objective + capabilities), there isn’t a single obvious mesa objective that the AI system is optimizing for off distribution.
Coming back to this, can you give an example of the kind of thing you’re thinking of (in humans, animals, current ML systems)? Or is there some other reason you think this could be the case in the future?
Also, do you think this will be significantly more efficient than “two clean parts (mesa objective + capabilities)”? (If not, it seems like we can use inner alignment techniques, e.g., transparency and verification, to force the model to be “two clean parts” if that’s better for safety.)
Coming back to this, can you give an example of the kind of thing you’re thinking of (in humans, animals, current ML systems)?
Humans don’t seem to have one mesa objective that we’re optimizing for. Even in this community, we tend to be uncertain about what our actual goal is, and most other people don’t even think about it. Humans do lots of things that look like “changing their objective”, e.g. maybe someone initially wants to have a family but then realizes they want to devote their life to public service because it’s more fulfilling.
Also, do you think this will be significantly more efficient than “two clean parts (mesa objective + capabilities)”?
I suspect it would be more efficient, but I’m not sure. (Mostly this is because humans and animals don’t seem to have two clean parts, but quite plausibly we’ll do something more interpretable than evolution and that will push towards a clean separation.) I also don’t know whether it would be better for safety to have it split into two clean parts.
Humans do lots of things that look like “changing their objective” [...]
That’s true, but unless the AI is doing something like human imitation or metaphilosophy (in other words, we have some reason to think that the AI will converge to the “right” values), it seems dangerous to let it “change its objective” on its own. Unless, I guess, it’s doing something like mild optimization or following norms, so that it can’t do much damage even if it switches to a wrong objective, and we can just shut it down and start over. But if it’s as messy as humans are, how would we know that it’s strictly following norms or doing mild optimization, and won’t “change its mind” about that too at some point (kind of like a human who isn’t very strategic suddenly having an insight, or reading something on the Internet, and deciding to become strategic)?
I think overall I’m still confused about your perspective here. Do you think this kind of “messy” AI is something we should try to harness and turn into a safety success story (if so how), or do you think it’s a danger that we should try to avoid (which may for example have to involve global coordination because it might be more efficient than safer AIs that do have clean separation)?
Oh, going back to an earlier comment, I guess you’re suggesting some of each: try to harness at lower capability levels, and coordinate to avoid at higher capability levels.
In this entire comment thread I’m not arguing that mesa optimizers are safe, or proposing courses of action we should take to make mesa optimization safe. I’m simply trying to forecast what mesa optimizers will look like if we follow the default path. As I said earlier,
I’m not sure what happens in this regime, but it seems like it undercuts the mesa optimization story as told in this sequence.
It’s very plausible that the mesa optimizers I have in mind are even more dangerous, e.g. because they “change their objective”. It’s also plausible that they’re safer, e.g. because they are full-blown explicit EU maximizers and we can “convince” them to adopt goals similar to ours.
Mostly I’m saying these things because I think the picture presented in this sequence is not fully accurate, and I would like it to be more accurate. Having an accurate view of what problems will arise in the future tends to help with figuring out solutions to those problems.
Humans and systems produced by meta learning both do reasonably well at learning, and don’t do “search” (depending on how loose you are with your definition of “search”).
Part of what inspired me to write my comment was watching my kid play logic puzzles. When she starts a new game, she has to do a lot of random trial-and-error with backtracking, much like MCTS. (She does the trial-and-error on the physical game board, but when I play I often just do it in my head.) Then her intuition builds up and she can start to recognize solutions earlier and earlier in the search tree, sometimes even immediately upon starting a new puzzle level. Then the game gets harder (the puzzle levels slowly increase in difficulty) or moves to a new regime where her intuitions don’t work, and she has to do more trial-and-error again, and so on. This sure seems like “search” to me.
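As a toy illustration of that pattern (my own, using N-Queens rather than the puzzles in the anecdote): a backtracking solver in which an “intuition” distilled from an earlier solve proposes moves first, so the amount of trial-and-error shrinks as experience accumulates.

```python
def solve_n_queens(n, intuition=None):
    """Backtracking search for one N-Queens solution.
    `intuition`, if given, maps a row index to a preferred column ordering."""
    cols = []   # cols[r] = column of the queen placed in row r
    tried = 0   # how many placements were attempted (a proxy for trial-and-error)

    def safe(row, col):
        return all(c != col and abs(c - col) != row - r for r, c in enumerate(cols))

    def backtrack(row):
        nonlocal tried
        if row == n:
            return True
        order = intuition(row) if intuition else range(n)
        for col in order:
            tried += 1
            if safe(row, col):
                cols.append(col)
                if backtrack(row + 1):
                    return True
                cols.pop()  # dead end: undo the move and try the next option
        return False

    backtrack(0)
    return cols, tried

if __name__ == "__main__":
    # First attempt: pure trial and error with backtracking.
    solution, tried_blind = solve_n_queens(8)
    # "Intuition" distilled from that solve: try the remembered column first.
    remembered = dict(enumerate(solution))
    recall = lambda row: sorted(range(8), key=lambda c: c != remembered[row])
    _, tried_with_intuition = solve_n_queens(8, intuition=recall)
    print(tried_blind, tried_with_intuition)  # the second run needs far fewer attempts
```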
Fwiw, on the original point, even standard machine learning algorithms (not the resulting models) don’t seem like “search” to me, though they also aren’t just a bag of heuristics and they do have a clearly delineated objective, so they fit well enough in the mesa optimization story.
This really confuses me. Maybe with some forms of supervised learning you can either calculate the solution directly, or just follow a gradient (which may be arguable whether that’s search or not), but with RL, surely the “explore” steps have to count as “search”? Do you have a different kind of thing in mind when you think of “search”?
I agree that if you have a model of the system (as you do when you know the rules of the game), you can simulate potential actions and consequences, and that seems like search.
Usually, you don’t have a good model of the system, and then you need something else.
Maybe with some forms of supervised learning you can either calculate the solution directly, or just follow a gradient (which may be arguable whether that’s search or not), but with RL, surely the “explore” steps have to count as “search”?
I was thinking of following a gradient in supervised learning.
I agree that pure reinforcement learning with a sparse reward looks like search. I doubt that pure RL with sparse reward is going to get you very far.
Reinforcement learning with demonstrations or a very dense reward doesn’t really look like search, it looks more like someone telling you what to do and you following the instructions faithfully.