Fwiw, I interpreted this as saying that it doesn’t work as a safety proposal (see also: my earlier comment). Also seems related to his arguments about ML systems having squiggles.
Yup. You can definitely train powerful systems on imitation of human thoughts, and in the limit this just gets you a powerful mesa-optimizer that figures out how to imitate them.
The question is when you get a misaligned mesa-optimizer relative to when you get superhuman behavior.
I think it’s pretty clear that you can get an optimizer which is upstream of the imitation (i.e. whose optimization gives rise to the imitation), or you can get an optimizer which is downstream of the imitation (i.e. which optimizes in virtue of its imitation). Of course most outcomes are messier than those two extremes, but the qualitative distinction still seems really central to these arguments.
I don’t think you’ve made much argument about when the transition occurs. Existing language models strongly appear to be “imitation upstream of optimization.” For example, it is much easier to get optimization out of them by having them imitate human optimization, than by setting up a situation where solving a hard problem is necessary to predict human behavior.
I don’t know when you expect this situation to change; if you want to make some predictions then you could use empirical data to help support your view. By default I would interpret each stronger system with “imitation upstream of optimization” to be weak evidence that the transition will be later than you would have thought. I’m not treating those as failed predictions by you or anything, but it’s the kind of evidence that adjusts my view on this question.
(I also think the chimp->human distinction being so closely coupled to language is further weak evidence for this view. But honestly the bigger thing I’m saying here is that 50% seems like a more reasonable place to be a priori, so I feel like the ball’s in your court to give an argument. I know you hate that move, sorry.)
Epistemic status: some of these ideas only crystallized today; normally I would take at least a few days to process before posting, to make sure there are no glaring holes in the reasoning, but I saw this thread and decided to reply since it’s topical.
Suppose that your imitator works by something akin to Bayesian inference with some sort of bounded simplicity prior (I think it’s true of transformers). In order for Bayesian inference to converge to exact imitation, you usually need realizability. Obviously today we don’t have realizability because the ANNs currently in use are not big enough to contain a brain, but we’re gradually getting there[1].
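(To spell out the term: realizability here means that the thing being predicted -- in this case, the human policy being imitated -- is itself one of the hypotheses with nonzero weight in the prior. The usual convergence guarantees for Bayesian inference assume something like this; without it, the posterior can only settle on the best available approximation.)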
More precisely, as ANNs grow in size we’re approaching a regime I dubbed “pseudorealizability”: on the one hand, the brain is in the prior[2]; on the other hand, its description complexity is pretty high and therefore its prior probability is pretty low. Moreover, a more sophisticated agent (e.g. infra-Bayesian RL / Turing RL / infra-Bayesian physicalist) would be able to use the rest of the world as useful evidence to predict some features of the human brain (i.e. even though human brains are complex, they are not random; there are reasons they came to be the way they are, if you understand the broader context, e.g. evolutionary history). But the latter kind of inference does not take the form of having a (non-mesa-optimizing) complete cartesian parsimonious model of the world in which brains are a particular piece, because (i) such a model would be too computationally expensive (non-realizability) and (ii) bridge rules add a lot of complexity.
Hence, the honest-imitation hypothesis is heavily penalized compared to hypotheses that are in themselves agents which are more “epistemically sophisticated” than the outer loop of the AI. Why agents rather than some kind of non-agentic epistemic engines? Because IB and IBP suggest that this level of epistemic sophistication requires some entanglement between epistemic rationality and instrumental rationality: in these frameworks, it is not possible to decouple the two entirely.
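To make the size of this penalty concrete (a toy illustration with made-up bit counts, taking the bounded simplicity prior above to behave like ~ 2^{-description complexity}): if honestly representing the brain costs about K_brain bits of description, while an agentic hypothesis that infers the brain’s properties from the broader world costs about K_agent bits, then the prior odds are roughly 2^{K_brain - K_agent} : 1 in favor of the agentic hypothesis. Even a modest 100-bit gap already gives odds of about 10^30 : 1.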
From the perspective of the outer loop, we can describe the situation as: “I woke up, expecting to see a world that is (i) simple and (ii) computationally cheap. At first glance, the world seemed like, not that. But, everything became clear when I realized that the world is generated by a relatively-simple-and-cheap ‘deity’ who made the world like this on purpose because it’s beneficial for it from its own strange epistemic vantage point.”
Coming back to the question of when to expect the transition from imitation-upstream-of-optimization to imitation-downstream-of-optimization. By the above line of argument, we should expect this transition to happen before the AI succeeds at any task which requires reasoning at least as sophisticated as the kind of reasoning that allows inferring properties of human brains from understanding the broader context of the world. Unfortunately, I cannot atm cash this out into a concrete milestone, but (i) it seems very believable that current language models are not there and (ii) maybe if we think about it more, we can come up with such a milestone.
Cotra’s report is a relevant point of reference, even though “having as many parameters as the brain according to some way to count brain-parameters” is ofc not the same as “capable of representing something which approximates the brain up to an error term that behaves like random noise”.
Assuming the training protocol is sufficiently good at decoupling the brain from the surrounding (more complex) world and pointing the AI at only trying to imitate the brain.
Hence, the honest-imitation hypothesis is heavily penalized compared to hypotheses that are in themselves agents which are more “epistemically sophisticated” than the outer loop of the AI.
In a deep learning context, the latter hypothesis seems much more heavily favored when using a simplicity prior (since gradient descent is simple to specify) than a speed prior (since gradient descent takes a lot of computation). So as long as the compute costs of inference remain smaller than the compute costs of training, a speed prior seems more appropriate for evaluating how easily hypotheses can become more epistemically sophisticated than the outer loop.
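(Spelling the contrast out in one common formalization, as a rough sketch: a simplicity prior weights a hypothesis h as ~ 2^{-|h|}, where |h| is its description length, while a speed prior additionally charges for compute, something like ~ 2^{-|h|} / t(h), i.e. penalizing |h| + log t(h). A hypothesis of the form “internally run something like gradient descent” has tiny |h| but enormous t(h), so it looks cheap under the former weighting and expensive under the latter.)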
Not quite sure what you’re saying here. Is the claim that speed penalties would help shift the balance against mesa-optimizers? This kind of solution is worth looking into, but I’m not too optimistic about it atm. First, the mesa-optimizer probably won’t add a lot of overhead compared to the considerable complexity of emulating a brain. In particular, it need not work by anything like our own ML algorithms. So, if it’s possible to rule out mesa-optimizers like this, it would require a rather extreme penalty. Second, there are limits on how much you can shape the prior while still having feasible learning, and I suspect that such an extreme speed penalty would not cut it. Third, depending on the setup, an extreme speed penalty might harm generalization[1]. But we definitely need to understand it more rigorously.
The most appealing version is Christiano’s “minimal circuits”, but that only works for inputs of fixed size. It’s not so clear what the variable-input-size (“transformer”) version of that would be.
No, I wasn’t advocating adding a speed penalty, I was just pointing at a reason to think that a speed prior would give a more accurate answer to the question of “which is favored” than the bounded simplicity prior you’re assuming:
Suppose that your imitator works by something akin to Bayesian inference with some sort of bounded simplicity prior (I think it’s true of transformers)
But now I realise that I don’t understand why you think this is true of transformers. Could you explain? It seems to me that there are many very simple hypotheses which take a long time to calculate, and which transformers therefore can’t be representing.
The word “bounded” in “bounded simplicity prior” referred to bounded computational resources. A “bounded simplicity prior” is a prior which involves either a “hard” (i.e. some hypotheses are excluded) or a “soft” (i.e. some hypotheses are down-weighted) bound on computational resources (or both), and also inductive bias towards simplicity (specifically it should probably behave as ~ 2^{-description complexity}). For a concrete example, see the prior I described here (w/o any claim to originality).
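(As a minimal sketch of what such a prior can look like, in the same notation: a hard bound weights hypothesis h as ~ 2^{-|h|} when its compute t(h) is at most some budget T, and as 0 otherwise, while a soft bound weights it as ~ 2^{-|h|} · 2^{-α·t(h)} for some penalty rate α. In both cases the 2^{-|h|} factor is the inductive bias towards simplicity, and T or α is where the bound on computational resources comes in.)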
This seems like a good thing to keep in mind, but also sounds too pessimistic about the ability of gradient descent to find inference algorithms that update more efficiently than gradient descent.
I do expect this to happen. The question is merely: what’s the best predictor of how hard it is to find inference algorithms more effective than gradient descent? Is it whether those inference algorithms are more complex than gradient descent? Or is it whether those inference algorithms run for longer than gradient descent? Since gradient descent is very simple but takes a long time to run, my bet is the latter: there are many simple ways to convert compute to optimisation, but few compute-cheap ways to convert additional complexity to optimisation.
Being faster than gradient descent is not a selective pressure, at least if we’re considering typical ML training procedures. What is a selective pressure is regularization, which functions much more like a complexity prior than a speed prior.
So (again sticking to modern day ML as an example, if you have something else in mind that would be interesting) of course there will be a cutoff in terms of speed, excluding all algorithms that don’t fit into the neural net. But among algorithms that fit into the NN, the penalty on their speed will be entirely explainable as a consequence of regularization that e.g. favors circuits that depend on fewer parameters, and would therefore be faster after some optimization steps.
If examples of successful parameters were sparse and tended to just barely fit into the NN, then this speed cutoff would be very important. But in the present day we see that good parameters tend to be pretty thick on the ground, and you can fairly smoothly move around in parameter space to make different tradeoffs.
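To make the regularization point above concrete, here is a minimal sketch of a bog-standard training loop (generic PyTorch; the architecture, data and hyperparameters are arbitrary placeholders): the only pressure beyond the task loss is weight decay, a penalty on the parameters that acts like a complexity prior, and nothing in the objective measures how long the learned circuit takes to run.

import torch
import torch.nn as nn

# Toy model; sizes and hyperparameters are arbitrary, for illustration only.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

# weight_decay adds an L2 penalty on the parameters: a complexity-style pressure.
# Nothing below depends on how long the learned function takes to evaluate.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)

x, y = torch.randn(128, 32), torch.randn(128, 1)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)  # task loss only
    loss.backward()
    opt.step()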
Here’s my stab at rephrasing this argument without reference to IB. Would appreciate corrections, and any pointers on where you think the IB formalism adds to the pre-theoretic intuitions:
At some point imitation will progress to the point where models use information about the world to infer properties of the thing they’re trying to imitate (humans) -- e.g. human brains were selected under some energy efficiency pressure, and so have certain properties. The relationship between “things humans are observed to say/respond to” and “how the world works” is extremely complex. Imitation-downstream-of-optimization is simpler. What’s more, imitation-downstream-of-optimization can be used to model (some of) the same things the brain-in-world strategy can. A speculative example: a model learns that humans use a bunch of different reasoning strategies (deductive reasoning, visual-memory search, analogizing...) and does a search over these strategies to see which one best fits the current context. This optimization-to-find-imitation is simpler than learning the evolutionary/cultural/educational world model which explains why the human uses one strategy over another in a given context.
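A hypothetical code sketch of that speculative example (every name here is invented for illustration; this is not a claim about how any real model works): the inner loop scores candidate human reasoning strategies against the context and imitates with the winner, i.e. the search is what produces the imitation (imitation downstream of optimization).

from typing import Callable, Dict

def imitate(context: str,
            strategies: Dict[str, Callable[[str], str]],
            fit: Callable[[str, str], float]) -> str:
    # Inner search: pick the candidate reasoning strategy (deduction, memory
    # search, analogy, ...) that best fits the current context, then answer
    # the way a human using that strategy would.
    best = max(strategies, key=lambda name: fit(context, name))
    return strategies[best](context)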
I must be missing something here. Isn’t optimizing necessary for superhuman behavior? So isn’t “superhuman behavior” a strictly stronger requirement than “being a mesa-optimizer”? So isn’t it clear which one happens first?
Fast imitations of subhuman behavior or imitations of augmented humans are also superhuman. As is planning against a human-level imitation. And so on.
It’s unclear if systems trained in that way will be imitating a process that optimizes, or will be optimizing in order to imitate. (Presumably they are doing both to varying degrees.) I don’t think this can be settled a priori.
This “imitating an optimizer” / “optimizing to imitate” dichotomy seems unnecessarily confusing to me. Isn’t it just inner alignment / inner misalignment (with the human behavior you’re being trained on)? If you’re imitating an optimizer, you’re still an optimizer.
I agree with this. If the key idea is, for example, that optimising imitators generalise better than imitations of optimisers, or, for a second example, that they pursue simpler goals, it seems to me that it’d be better just to draw distinctions based on generalisation or goal simplicity and not on optimising imitators/imitations of optimisers.
Sorry, I should be more specific. We are talking about AGI Safety; it seems unlikely that running narrow AI faster gets you AGI. I’m not sure if you disagree with that. I don’t understand what you mean by “imitations of augmented humans” and “planning against a human-level imitation”.