Thanks, this is a very clear framework for understanding your objection. Here’s the first counterargument that comes to mind: Minimax search is a theoretically optimal algorithm for playing chess, but is too computationally costly to implement in practice. One could therefore argue that all that matters is computationally feasible heuristics, and modeling an ideal chess player as executing a minimax search adds nothing to our knowledge of chess. OTOH, doing a minimax search of the game tree for some bounded number of moves, then applying a simple board-evaluation heuristic at the leaf nodes, is a pretty decent algorithm in practice.
Furthermore, it seems like there’s a pattern where, the more general the algorithmic problem you want to solve is, the more your solution is compelled to resemble some sort of brute-force search. There are all kinds of narrow abilities we’d like an AGI to have that depend on the detailed structure of the physical world, but it’s not obvious that any such structure, beyond hypotheses about what is feasibly computable, could be usefully exploited to solve the kinds of problem laid out in this sequence. So it may well be that the best approach turns out to involve some sort of bounded search over simpler strategies, plus lots and lots of compute.
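To make the depth-limited idea concrete, here is a minimal sketch in Python. It is illustrative only: `game`, `legal_moves`, `apply`, `is_terminal`, and `evaluate` are a hypothetical interface standing in for whatever game representation and board-evaluation heuristic you actually use.

```python
# Sketch: depth-limited minimax with a cheap heuristic at the leaf nodes.
# `game` is a hypothetical interface: legal_moves(state), apply(state, move),
# is_terminal(state), and evaluate(state), scored from the maximizing
# player's point of view.

def minimax(game, state, depth, maximizing):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # heuristic stands in for the full search
    values = (
        minimax(game, game.apply(state, m), depth - 1, not maximizing)
        for m in game.legal_moves(state)
    )
    return max(values) if maximizing else min(values)

def best_move(game, state, depth=4):
    # Pick the move whose bounded-depth subtree looks best.
    return max(
        game.legal_moves(state),
        key=lambda m: minimax(game, game.apply(state, m), depth - 1, False),
    )
```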
OTOH, doing a minimax search of the game tree for some bounded number of moves, then applying a simple board-evaluation heuristic at the leaf nodes, is a pretty decent algorithm in practice.
I’ve written previously about this kind of argument—see here (scroll down to the non-blockquoted text). tl;dr: we can often describe the same optimum in multiple ways, with each description giving us a different series that approximates the optimum in the limit. Whether any one series does well or poorly when truncated to N terms can’t be explained by saying “it’s a truncation of the optimum,” since they all are; the truncations’ properties are facts about the different series, not about the optimum. I illustrate with different series expansions for π.
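For concreteness, here is a toy version of that π illustration (these particular series are my own choice here, not necessarily the ones in the linked post). Both expressions converge to π, but their truncations at N terms behave very differently:

```python
import math

def leibniz_pi(n_terms):
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...   (converges very slowly)
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def nilakantha_pi(n_terms):
    # pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...   (converges much faster)
    total = 3.0
    for k in range(1, n_terms + 1):
        a = 2 * k
        total += (-1) ** (k + 1) * 4 / (a * (a + 1) * (a + 2))
    return total

for n in (10, 100, 1000):
    print(n, abs(leibniz_pi(n) - math.pi), abs(nilakantha_pi(n) - math.pi))
```

Truncating the Leibniz series at 10 terms leaves an error near 0.1, while the Nilakantha truncation is already accurate to several decimal places; the difference is a fact about the two series, not about π.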
Furthermore, it seems like there’s a pattern where, the more general the algorithmic problem you want to solve is, the more your solution is compelled to resemble some sort of brute-force search.
You may be right, and there are interesting conversations to be had about when solutions will tend to look like search and when they won’t. But this doesn’t feel like it really addresses my argument, which is not about “what kind of algorithm should you use” but about the weirdness of the injunction to optimize over a space containing every procedure you could ever do, including all of the optimization procedures you could ever do. There is a logical / definitional weirdness here that can’t be resolved by arguments about what sorts of (logically / definitionally unproblematic) algorithms are good or bad in what domains.
...the weirdness of the injunction to optimize over a space containing every procedure you could ever do, including all of the optimization procedures you could ever do.
My most recent preprint discusses multi-agent Goodhart ( https://arxiv.org/abs/1810.10862 ) and uses the example of poker, along with a different argument somewhat related to the embedded-agency problem, to argue that optimization over strategies needs to include optimizing over the larger solution space.
To summarize, and to try to clarify how I think it relates: strategies for game-playing must at least implicitly include a model of the other player’s actions, so that an agent can tell which strategies will work against them. We need uncertainty in that model, because if we do something silly like assume they are rational Bayesian agents, we are likely to play suboptimally against their actual strategy. But the model of the other agent itself needs to account for their model of our strategy, including uncertainty about our search procedure for strategies—otherwise the space is clearly much too large to optimize over.
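Here is a toy sketch of just the first step (my own illustration, not from the preprint): computing a best response against a belief distribution over candidate opponent strategies, rather than against a single assumed “rational” strategy. It deliberately leaves out the recursive part, where the opponent’s model includes our search procedure in turn.

```python
import numpy as np

# Toy payoff matrix for rock-paper-scissors, scored for the row player;
# rows = our action, columns = opponent's action.
PAYOFF = np.array([
    [ 0, -1,  1],   # rock     vs (rock, paper, scissors)
    [ 1,  0, -1],   # paper
    [-1,  1,  0],   # scissors
])

# Instead of assuming one "rational" opponent strategy, keep a belief:
# a distribution over several candidate opponent mixtures.
candidate_opponents = np.array([
    [1/3, 1/3, 1/3],   # the equilibrium mixture
    [0.6, 0.2, 0.2],   # a rock-heavy habit
    [0.2, 0.6, 0.2],   # a paper-heavy habit
])
belief = np.array([0.5, 0.3, 0.2])  # our uncertainty over which one we face

# Expected opponent mixture under our belief, then a best response to it.
expected_opponent = belief @ candidate_opponents
expected_payoffs = PAYOFF @ expected_opponent
best_response = int(np.argmax(expected_payoffs))
print("expected payoff per action:", expected_payoffs)
print("best response index:", best_response)  # 0=rock, 1=paper, 2=scissors
```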
Does this make sense? (I may need to expand on this and clarify my thinking...)