But for the purpose of analyzing its output, I don’t think this discussion is critical if we agree that we can expect a good heuristic search through models will identify any model that a human could hypothesize.
I think I would expect essentially all models that a human could hypothesize to be in the search space—but if you’re doing a local search, then you only ever really see the easiest to find model with good behavior, not all models with good behavior, which means you’re relying on your prior/inductive biases/whatever it is that determines how hard models are to find to do a lot more work for you. Cast into the Bayesian setting, a local search like this is relying on something like the MAP model not being deceptive—and escaping that to instead get N models sampled independently from the top q proportion or whatever seems very difficult to do via any local search algorithm.
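To make the contrast above concrete, here is a toy sketch (the integer “models”, the loss landscape, and all function names are invented purely for illustration) of the difference between a local search, which surfaces whichever good model is easiest to reach from its starting point, and independent sampling from the set of good models, which does not depend on how hard each model is to find:

```python
import random

# Toy illustration only: "models" are integers, the loss landscape is arbitrary,
# and "good" just means low loss. Nothing here is anyone's actual proposal.
def loss(model: int) -> float:
    return abs((model % 97) - 48) / 48.0

def local_search(start: int, steps: int = 10_000) -> int:
    """Hill-climb: return the easiest-to-find low-loss model near `start`."""
    current = start
    for _ in range(steps):
        candidate = current + random.choice([-1, 1])  # only consider nearby models
        if loss(candidate) <= loss(current):
            current = candidate
    return current

def independent_samples(n: int, q: float = 0.05) -> list:
    """Draw n models independently from (roughly) the top-q fraction by loss,
    regardless of how hard each one would be to reach by a local method."""
    population = list(range(10_000))
    good = sorted(population, key=loss)[: int(q * len(population))]
    return random.sample(good, n)

# One local run lands in whichever basin its seed falls into; independent
# samples can come from anywhere in the set of good models.
print(local_search(start=123))
print(independent_samples(5))
```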
So would you say you disagree with the claim:
I think that arbitrary limits on heuristic search of the form “the next model I consider must be fairly close to the last one I did” will not help it very much if it’s anywhere near smart enough to merit membership in a generally intelligent predictor.
?
Yeah; I think I would say I disagree with that. Notably, evolution is not a generally intelligent predictor, but is still capable of producing generally intelligent predictors. I expect the same to be true of processes like SGD.
If we ever produce generally intelligent predictors (or “accurate world-models” in the terminology we’ve been using so far), we will need a process that is much more efficient than evolution.
But also, I certainly don’t think that in order to be generally intelligent you need to start with a generally intelligent subroutine. Then you could never get off the ground. I expect good hypothesis-generation / model-proposal to use a mess of learned heuristics which would not be easily directed to solve arbitrary tasks, and I expect the heuristic “look for models near the best-so-far model” to be useful, but I don’t think making it ironclad would be useful.
Another thought on our exchange:
Me: we can expect a good heuristic search through models will identify any model that a human could hypothesize
You: I think I would expect essentially all models that a human could hypothesize to be in the search space—but if you’re doing a local search, then you only ever really see the easiest to find model with good behavior
If what you say is correct, then it sounds like exclusively-local search precludes human-level intelligence! (Which I don’t believe, by the way, even if I think it’s a less efficient path). One human competency is generating lots of hypotheses, and then having many models of the world, and then designing experiments to probe those hypotheses. It’s hard for me to imagine that an agent that finds an “easiest-to-find model” and then calls it a day could ever do human-level science. Even something as simple as understanding an interlocutor requires generating diverse models on the fly: “Do they mean X or Y with those words? Let me ask a clarifying question.”
I’m not this bearish on local search. But if local search is this bad, I don’t think it is a viable path to AGI, and if it’s not, then the internals don’t matter for the purposes of our discussion, and we can skip to what I take to be the upshot:
we can expect a good heuristic search through models will identify any model that a human could hypothesize
It’s hard for me to imagine that an agent that finds an “easiest-to-find model” and then calls it a day could ever do human-level science.
I certainly don’t think SGD is a powerful enough optimization process to do science directly, but it definitely seems powerful enough to find an agent which does do science.
if local search is this bad, I don’t think it is a viable path to AGI
We know that local search processes can produce AGI, so viability is a question of efficiency—and we know that SGD is at least efficient enough to solve a wide variety of problems from image classification, to language modeling, to complex video games, all given just current compute budgets. So while I could certainly imagine SGD being insufficient, I definitely wouldn’t want to bet on it.
I certainly don’t think SGD is a powerful enough optimization process to do science directly, but it definitely seems powerful enough to find an agent which does do science.
Okay I think we’ve switched from talking about Q-learning to talking about policy gradient. (Or we were talking about the latter the whole time, and I didn’t notice it). The question that I think is relevant is: how are possible world-models being hypothesized and analyzed? That’s something I expect to be done with messy heuristics that sometimes have discontinuities in their sequence of outputs. Which means I think that no reasonable DQN will be generally intelligent (except maybe an enormously wide, attention-based one, such that finding models is more about selective attention at any given step than it is about gradient descent over the whole history).
A policy gradient network, on the other hand, could maybe (after having its parameters updated through gradient descent) become a network that, in a single forward pass, considers diverse world-models (generated with a messy non-local heuristic), analyzes their plausibility, and then acts. At the end of the day, what we have is an agent modeling the world, and we can expect it to consider any model that a human could come up with. (This paragraph also applies to the DQN with a gradient-descent-trained method for selectively attending to different parts of a wide network, since that could amount to effectively considering different models).
Hmmm… I don’t think I was ever even meaning to talk specifically about RL, but regardless I don’t expect nearly as large of a difference between Q-learning and policy gradient algorithms. If we imagine both types of algorithms making use of the same massive neural network, the only real difference is how the output of that neural network is interpreted, either directly as a policy, or as Q values that are turned into a policy via something like softmax. In both cases, the neural network is capable of implementing any arbitrary policy and should be getting a similar sort of feedback signal from the training process—especially if you’re using a policy gradient algorithm that involves something like advantage estimation rather than actual rollouts, since the update rule in that situation is going to look very similar to the Q learning update rule. I do expect some minor differences in the sorts of models you end up with, such as Q learning being more prone to non-myopic behavior across episodes, and I think there are some minor reasons that policy gradient algorithms are favored in real-world settings, since they get to learn their exploration policy rather than having it hard-coded and can handle continuous action domains—but overall I think these sorts of differences are pretty minor and shouldn’t affect whether these approaches can reach general intelligence or not.
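As a rough illustration of this point (a PyTorch-flavored sketch; the shapes, the random data, and the use of one shared network are purely hypothetical), the same forward pass can be read either as Q-values pushed through a softmax/argmax or directly as policy logits, and an advantage-style policy-gradient loss built from a one-step TD error leans on essentially the same bootstrapped signal as the Q-learning loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy setup: one shared network body; everything below is illustrative only.
n_actions, gamma = 4, 0.99
net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, n_actions))

obs      = torch.randn(32, 8)                  # batch of states
next_obs = torch.randn(32, 8)                  # batch of next states
actions  = torch.randint(0, n_actions, (32,))  # actions taken
rewards  = torch.randn(32)                     # rewards received

out = net(obs)  # the same forward pass, under either interpretation

# Interpretation 1: outputs are Q-values. A softmax (or argmax) over them gives
# the policy, and the loss regresses Q(s, a) onto the bootstrapped target
# r + gamma * max_a' Q(s', a').
q_sa   = out.gather(1, actions.unsqueeze(1)).squeeze(1)
boot   = rewards + gamma * net(next_obs).max(dim=1).values
q_loss = F.mse_loss(q_sa, boot.detach())

# Interpretation 2: outputs are policy logits. An advantage-weighted
# policy-gradient loss, with the advantage crudely estimated by the one-step
# TD error, uses essentially the same bootstrapped quantity.
logp      = F.log_softmax(out, dim=1).gather(1, actions.unsqueeze(1)).squeeze(1)
advantage = (boot - q_sa).detach()
pg_loss   = -(logp * advantage).mean()
```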
I certainly don’t think SGD is a powerful enough optimization process to do science directly, but it definitely seems powerful enough to find an agent which does do science.
I interpreted this bit as talking about RL. But taking us back out of RL, in a wide neural network with selective attention that enables many qualitatively different forward passes, gradient descent seems to be training the way different models get proposed (i.e. the way attention is allocated), since this happens in a single forward pass, and what we’re left with is a modeling routine that is heuristically considering (and later comparing) very different models. And this should include any model that a human would consider.
I think that is the main thread of our argument, but now I’m curious if I was totally off the mark about Q-learning and policy gradient.
but overall I think these sorts of differences are pretty minor and shouldn’t affect whether these approaches can reach general intelligence or not.
I had thought that maybe since a Q-learner is trained as if the cached point estimate of the Q-value of the next state is the Truth, it won’t, in a single forward pass, consider different models about what the actual Q-value of the next state is. At most, it will consider different models about what the very next transition will be.
a) Does that seem right? and b) Aren’t there some policy gradient methods that don’t face this problem?
I had thought that maybe since a Q-learner is trained as if the cached point estimate of the Q-value of the next state is the Truth, it won’t, in a single forward pass, consider different models about what the actual Q-value of the next state is. At most, it will consider different models about what the very next transition will be.
a) Does that seem right? and b) Aren’t there some policy gradient methods that don’t face this problem?
This seems wrong to me—even though the Q learner is trained using its own point estimate of the next state, it isn’t, at inference time, given access to that point estimate. The Q learner has to choose its Q values before it knows anything about what the Q value estimates will be of future states, which means it certainly should have to consider different models of what the next transition will be like.
it certainly should have to consider different models of what the next transition will be like.
Yeah I was agreeing with that.
even though the Q learner is trained using its own point estimate of the next state, it isn’t, at inference time, given access to that point estimate.
Right, but one thing the Q-network, in its forward pass, is trying to reproduce is the point estimate of the Q-value of the next state (since it doesn’t have access to it). What it isn’t trying to reproduce, because it isn’t trained that way, is multiple models of what the Q-value might be at a given possible next state.
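For concreteness, this is the piece of the standard one-step Q-learning update being referred to (a minimal sketch; `q_target_net` and the other names are illustrative): the regression target for Q(s, a) is a single bootstrapped scalar per transition, so nothing in the training signal asks the network to represent several competing hypotheses about the next state’s value.

```python
import torch

# Minimal sketch of the one-step Q-learning target (all names illustrative).
# `q_target_net` stands for the cached/frozen copy of the Q-network.
def td_target(rewards, next_obs, dones, q_target_net, gamma=0.99):
    with torch.no_grad():
        # A single point estimate of the next state's value, rather than a set
        # of alternative hypotheses about what that value might be.
        next_q = q_target_net(next_obs).max(dim=1).values
    return rewards + gamma * (1.0 - dones.float()) * next_q
```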