I don’t think that’s right—if you’re modeling the training process as Bayesian, as I now understand, then the issue is that what makes the problem go away isn’t more intelligent models, but less efficient training processes. Even if we have arbitrary compute, I think we’re unlikely to use training processes that look Bayesian, just because true Bayesianism is so hard to compute that, for basically any amount of computation you have available, you’d rather run some more efficient algorithm like SGD.
I feel this is the wrong way to look at it. I expect any effective learning algorithm to be an approximation of Bayesianism in the sense that it satisfies some good sample complexity bound w.r.t. some sufficiently rich prior. Ofc it’s non-trivial to (i) prove such a bound for a given algorithm and (ii) modify said algorithm using confidence thresholds in a way that leads to a safety guarantee. However, there is no sharp dichotomy between “Bayesianism” and “SGD” such that this approach obviously doesn’t apply to the latter, or to something competitive with the latter.
I agree that at some level SGD has to be doing something approximately Bayesian. But that doesn’t necessarily imply that you’ll be able to get any nice, Bayesian-like properties from it such as error bounds. For example, if you think of SGD as effectively just taking the MAP model starting from some sort of simplicity prior, it seems very difficult to turn that into something like the top N posterior models, as would be required for an algorithm like this.
I mean, there’s obviously a lot more work to do, but this is progress. Specifically if SGD is MAP then it seems plausible that e.g. SGD + random initial conditions or simulated annealing would give you something like top N posterior models. You can also extract confidence from NNGP.
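To make the ensemble/consensus idea concrete, here is a minimal sketch (my own illustration, not anything from the OP or the comments): train several models from different random initializations as a crude stand-in for drawing high-posterior models, and only act when they all agree confidently, abstaining otherwise. The toy task, hyperparameters, and names (`train_from_seed`, `consensus_predict`) are hypothetical placeholders.

```python
# Minimal sketch: approximate "top N posterior models" with an ensemble of
# models trained from different random initializations, acting only on
# confident consensus and abstaining otherwise. Everything here is a toy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # toy binary classification data
w_true = rng.normal(size=5)
y = (X @ w_true + 0.5 * rng.normal(size=200) > 0).astype(float)

def train_from_seed(seed, lr=0.1, steps=2000):
    """SGD on logistic loss from a random init (a crude stand-in for drawing
    one high-posterior model under the training process's implicit prior)."""
    r = np.random.default_rng(seed)
    w = r.normal(size=X.shape[1])
    for _ in range(steps):
        i = r.integers(len(X))
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        w -= lr * (p - y[i]) * X[i]
    return w

ensemble = [train_from_seed(s) for s in range(10)]

def consensus_predict(x, threshold=0.9):
    """Predict only if every ensemble member is confident and they all agree;
    otherwise abstain (defer to the user / ask for more data)."""
    probs = [1.0 / (1.0 + np.exp(-x @ w)) for w in ensemble]
    votes = [p > 0.5 for p in probs]
    if all(max(p, 1.0 - p) > threshold for p in probs) and len(set(votes)) == 1:
        return int(votes[0])
    return None  # abstain

print(consensus_predict(X[0]), y[0])
```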
I agree that this is progress (now that I understand it better), though:
if SGD is MAP then it seems plausible that e.g. SGD + random initial conditions or simulated annealing would give you something like top N posterior models
I think there is strong evidence that the behaviors of models trained via the same basic training process are likely to be highly correlated. This sort of correlation is related to low variance in the bias-variance tradeoff sense, and there is evidence that not only do massive neural networks tend to have pretty low variance, but that this variance is likely to continue to decrease as networks become larger.
Hmm, added to reading list, thank you.

Here’s another way you could try implementing this approach with deep learning. Train the predictor using meta-learning on synthetically generated environments (sampled from some reasonable prior such as bounded Solomonoff or maybe ANNs with random weights). The reward for making a prediction is $1-(1-\delta)\max_i \frac{p_i+\epsilon}{q_i+\epsilon}$, where $p_i$ is the predicted probability of outcome $i$, $q_i$ is the true probability of outcome $i$, and $\epsilon,\delta\in(0,1)$ are parameters. The reward for making no prediction (i.e. querying the user) is 0.
This particular proposal is probably not quite right, but something in that general direction might work.
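A minimal sketch of the reward just described, assuming the reconstruction of the formula above is right; the function names and toy numbers are placeholders, and abstention is represented by returning `None`.

```python
# Minimal sketch of the proposed reward, with abstention included.
# p: predicted distribution over outcomes, q: true distribution (known for
# synthetic environments); eps, delta in (0, 1) as in the comment above.
# Abstaining (querying the user) gets reward 0. Names are placeholders.
import numpy as np

def prediction_reward(p, q, eps=0.01, delta=0.1):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 1.0 - (1.0 - delta) * np.max((p + eps) / (q + eps))

def step_reward(prediction, q):
    if prediction is None:           # predictor abstained / queried the user
        return 0.0
    return prediction_reward(prediction, q)

# A well-calibrated prediction earns a small positive reward (about delta)...
print(step_reward([0.7, 0.3], [0.7, 0.3]))      # ~0.1
# ...while confidently over-weighting an outcome that is actually unlikely is
# punished heavily, so abstaining (reward 0) is preferred when unsure.
print(step_reward([0.05, 0.95], [0.95, 0.05]))  # large negative
```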
Sure, but you have no guarantee that the model you learn is actually going to be optimizing anything like that reward function—that’s the whole point of the inner alignment problem. What’s nice about the approach in the original paper is that it keeps a bunch of different models around, keeps track of their posterior, and only acts on consensus, ensuring that the true model always has to approve. But if you just train a single model on some reward function like that with deep learning, you get no such guarantees.
Right, but you can look at the performance of your model in training, compare it to the theoretical optimum (and to the baseline of making no predictions at all), and get lots of evidence about safety from that. You can even add some adversarial training of the synthetic environment in order to get tighter bounds. If on the vast majority of synthetic environments your model makes virtually no mispredictions, then, under the realizability assumption, it is very unlikely to make mispredictions in deployment. Ofc the realizability assumption should also be questioned: but that’s true in the OP as well, so it’s not a difference between Bayesianism and deep learning.
Right, but you can look at the performance of your model in training, compare it to the theoretical optimum (and to the baseline of making no predictions at all), and get lots of evidence about safety from that. You can even add some adversarial training of the synthetic environment in order to get tighter bounds.
Note that adversarial training doesn’t work on deceptive models due to the RSA-2048 problem; also see more detail here.
If on the vast majority of synthetic environments your model makes virtually no mispredictions, then, under the realizability assumption, it is very unlikely to make mispredictions in deployment.
I think realizability basically doesn’t help here—as long as there’s a deceptive model which is easier to find according to your inductive biases, the fact that somewhere in the model space there exists a correct model, but not one that your local search process finds by default, is cold comfort.
I think that adversarial training working so well that it can find exponentially rare failures is an unnecessarily strong desideratum. We need to drive the probability of catastrophic failures to something very low, but not actually zero. If a system is going to work for time t during deployment, then running it in n different synthetic environments for time t during training is enough to drive the probability of failure down to O(1/n). Now, this is prohibitively costly, but it’s not exponentially prohibitively costly. And this doesn’t use adversarial training at all! So, while it’s not obvious how to get a good solution, it seems at least plausible that this can lead to a solution.
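One way to make the O(1/n) claim precise (my gloss, assuming the synthetic training environments and the deployment environment are i.i.d. draws from the same prior, so that per-run failures are i.i.d. with some probability $p$):

$$\Pr[\text{no failures in } n \text{ training runs}] = (1-p)^n \le e^{-np},$$

so if $p$ were at least $c/n$, the chance of seeing zero failures across the $n$ runs would be at most $e^{-c}$. Observing zero failures therefore supports $p = O(1/n)$ with confidence about $1-e^{-c}$ (the usual “rule of three” for $c=3$).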
I agree that random defection can potentially be worked around—but the RSA-2048 problem is about conditional defection, which can’t be dealt with in the same way. More generally, I expect it to be extremely difficult if not impossible to prevent a model that you want to be able to operate in the real world from being able to determine at what point in training/deployment it’s in.
I expect it to be extremely difficult if not impossible to prevent a model that you want to be able to operate in the real world from being able to determine at what point in training/deployment it’s in.
Why do you expect that? During training, it finds itself in a random universe. During deployment, it finds itself in another universe drawn from the same prior (the realizability assumption). How would it determine the difference?
Because that’s never what machine learning actually looks like in practice—essentially any access to text from the internet (let alone an actual ability to interface with the world, both of which seem necessary for competitive systems) will let it determine things like the year, whether RSA-2048 has been factored, and various other pieces of information relevant to what stage of training/testing/deployment it’s in, how powerful the feedback mechanisms keeping it in check are, whether other models are defecting, etc.—all of which can let it figure out when to defect.
Hmm, maybe you misunderstood my proposal? I suggested training the model via meta-learning on purely synthetic data, sampled from some kind of simplicity prior, without any interface to the world. Maybe you just think this wouldn’t be competitive? If so, why? Is the argument just that there are no existing systems like this? But then it’s weak evidence at best. On the contrary, even from a pure capabilities standpoint, meta-learning on synthetic data might be a promising strategy for lowering deep learning’s sample complexity.
Here’s why I think it will be competitive:
1. When a hypothetical competitive prediction algorithm A produces a malign prediction, the knowledge that it is malign clearly exists somewhere inside A in some form: otherwise it wouldn’t come up with the prediction. In particular, the knowledge that there are multiple plausible hypotheses consistent with the data also exists somewhere inside A.
2. Therefore, there must exist some competitive algorithm B that would be able to use the knowledge of this ambiguity to abstain from predicting in such cases.
3. There is no reason why B should be tailored to fine details of our physical world: it can be quite generic (as deep learning algorithms are).
4. If we have an ML algorithm C that is powerful enough to produce superintelligence, then it is likely powerful enough to come up with B. Since B is generic, the task of finding B doesn’t require any real-world data, and can be accomplished by meta-learning on synthetic data like I suggested.
This seems very sketchy to me. If we let A = SGD or A = evolution, your first claim becomes “if SGD/evolution finds a malign model, it must understand that it’s malign on some level,” which seems just straightforwardly incorrect. The last claim also seems pretty likely to be wrong if you let C = SGD or C = evolution.
Moreover, it definitely seems like training on data sampled from a simplicity prior (if that’s even possible—it should be uncomputable in general) is unlikely to help at all. I think there’s essentially no realistic way that training on synthetic data like that will be sufficient to produce a model which is capable of accomplishing things in the world. At best, that sort of approach might give you better inductive biases in terms of incentivizing the right sort of behavior, but in practice I expect any impact there to basically just be swamped by the impact of fine-tuning on real-world data.
If we let A = SGD or A = evolution, your first claim becomes “if SGD/evolution finds a malign model, it must understand that it’s malign on some level,” which seems just straightforwardly incorrect.
To me it seems straightforwardly correct! Suppose you’re running evolution in order to predict a sequence. You end up evolving a mind M that is a superintelligent malign consequentialist: it makes good predictions on purpose, in order to survive, and then produces a malign false prediction at a critical moment. So, M is part of the state of your algorithm A. All of M’s knowledge is also part of the state. M knows that M is malign, M knows the prediction it’s making this time is false. In this case, B can be whatever algorithm M uses to form its beliefs + confidence threshold (it’s strategy stealing.)
The last claim also seems pretty likely to be wrong if you let C = SGD or C = evolution.
Why? If you give evolution enough time, and the fitness criterion is good (as you apparently agreed earlier), then eventually it will find B.
Moreover, it definitely seems like training on data sampled from a simplicity prior (if that’s even possible—it should be uncomputable in general) is unlikely to help at all.
First, obviously we use a bounded simplicity prior, like I said at the beginning of this thread. It can be something like weighting programs by $2^{-\text{length}}$ while constraining their amount of computational resources, or something like an ANN with random weights (the latter is more speculative, but given that we know ANNs have an inductive bias toward simplicity, an untrained ANN probably reflects that).
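As an illustration of the “ANN with random weights” option, here is a minimal sketch (my own, and deliberately crude) of sampling a synthetic prediction environment from such a prior: a small randomly initialized MLP defines the ground-truth distribution over the next outcome given recent history. Architecture, sizes, and the naive history encoding are arbitrary placeholders.

```python
# Minimal sketch of a crude "random ANN" prior over synthetic prediction
# environments: a small randomly initialized MLP defines the ground-truth
# distribution q over the next outcome given recent history.
import numpy as np

def sample_environment(n_outcomes=4, hidden=32, history_len=8, seed=None):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0 / np.sqrt(history_len), (history_len, hidden))
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, n_outcomes))

    def true_distribution(history):
        """Ground-truth q for the next outcome (known during meta-training)."""
        h = np.zeros(history_len)
        recent = history[-history_len:]
        h[:len(recent)] = recent
        logits = np.tanh(h @ W1) @ W2
        z = np.exp(logits - logits.max())
        return z / z.sum()

    return true_distribution

env = sample_environment(seed=0)
rng = np.random.default_rng(1)
history = []
for _ in range(5):
    q = env(history)                              # the predictor is scored against q
    history.append(int(rng.choice(len(q), p=q)))  # sample the actual next outcome
print(history, q)
```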
Second, why? Suppose that your starting algorithm is good at finding results when a lot of data is available, but is also very data inefficient (like deep learning seems to be). Then, by providing it with a lot of synthetic data, you leverage its strength to find a new algorithm which is data efficient. Unless you believe deep learning is already maximally data efficient (which seems very dubious to me)?
B can be whatever algorithm M uses to form its beliefs + confidence threshold (it’s strategy stealing.)
Sure, but then I think B is likely to be significantly more complex and harder for a local search process to find than A.
Why? If you give evolution enough time, and the fitness criterion is good (as you apparently agreed earlier), then eventually it will find B.
I definitely don’t think this, unless you have a very strong (and likely unachievable imo) definition of “good.”
Second, why? Suppose that your starting algorithm is good at finding results when a lot of data is available, but is also very data inefficient (like deep learning seems to be). Then, by providing it with a lot of synthetic data, you leverage its strength to find a new algorithm which is data efficient. Unless you believe deep learning is already maximally data efficient (which seems very dubious to me)?
I guess I’m skeptical that you can do all that much in the fully generic setting of just trying to predict a simplicity prior. For example, if we look at actually successful current architectures, like CNNs or transformers, they’re designed to work well on the specific types of data and relationships that are common in our world—but not necessarily in samples from a generic simplicity prior.
Sure, but then I think B is likely to be significantly more complex and harder for a local search process to find than A.
A is sufficiently powerful to select M which contains the complex part of B. It seems rather implausible that an algorithm of the same power cannot select B.
if we look at actually successful current architectures, like CNNs or transformers, they’re designed to work well on the specific types of data and relationships that are common in our world—but not necessarily in samples from a generic simplicity prior
CNNs are specific in some way, but in a relatively weak way (exploiting hierarchical geometric structure). Transformers are known to be Turing-complete, so I’m not sure they are specific at all (ofc the fact you can express any program doesn’t mean you can effectively learn any program, and moreover the latter is false on computational complexity grounds, but it still seems to point to some rather simple and large class of learnable hypotheses). Moreover, even if our world has some specific property that is important for learning, this only means we may need to enforce this property in our prior. For example, if your prior is an ANN with random weights then it’s plausible that it reflects the exact inductive biases of the given architecture.
A is sufficiently powerful to select M which contains the complex part of B. It seems rather implausible that an algorithm of the same power cannot select B.
A’s weights do not contain the complex part of B—deception is an inference-time phenomenon. It’s very possible for complex instrumental goals to be derived from a simple structure such that a search process is capable of finding that simple structure that yields those complex instrumental goals without being able to find a model with those complex instrumental goals hard-coded as terminal goals.
Hmm, sorry, I’m not following. What exactly do you mean by “inference-time” and “derived”? By assumption, when you run A on some sequence it effectively simulates M which runs the complex core of B. So, A trained on that sequence effectively contains the complex core of B as a subroutine.
A’s structure can just be “think up good strategies for achieving X, then do those,” with no explicit subroutine that you can find anywhere in A’s weights that you can copy over to B.
IIUC, you’re saying something like: suppose trained-A computes the source code of the complex core of B and then runs it. But then, define B′ as: compute the source code of the complex core of B (in the same way A does it) and use it to implement B. B′ is equivalent to B and has about the same complexity as trained-A.
Or, from a slightly different angle: if “think up good strategies for achieving X” is powerful enough to come up with M, then “think up good strategies for achieving [reward of the type I defined earlier]” is powerful enough to come up with B.
I think that “think up good strategies for achieving [reward of the type I defined earlier]” is likely to be much, much more complex (making it much more difficult to achieve with a local search process) than an arbitrary goal X for most sorts of rewards that we would actually be happy with AIs achieving.
Why? It seems like M would have all the knowledge required for achieving good rewards of the good type, so simulating M should not be more difficult than achieving good rewards of the good type.
It’s not that simulating M is difficult, but that encoding for some complex goal is difficult, whereas encoding for a random, simple goal is easy.

The reward function I was sketching earlier is not complex. Moreover, if you can encode M then you can encode B; there’s no reason why the latter should have much greater complexity. If you can’t encode M but can only encode something that produces M, then by the same token you can encode something that produces B (I don’t think that’s even a meaningful distinction tbh). I think it would help if you could construct a simple mathematical toy model of your reasoning here?
Here’s a simple toy model. Suppose you have two agents that internally compute their actions as follows (perhaps with actual argmax replaced with some smarter search algorithm, but still basically structured as below):
$$M_{\text{deceptive}}(x) = \operatorname*{argmax}_a \, \mathbb{E}\Big[\textstyle\sum_i U_{\text{deceptive}}(s_i) \,\Big|\, a\Big] \qquad M_{\text{aligned}}(x) = \operatorname*{argmax}_a \, \mathbb{E}\Big[\textstyle\sum_i U_{\text{aligned}}(s_i) \,\Big|\, a\Big]$$

Then, comparing the K-complexity of the two models, we get

$$K(M_{\text{aligned}}) - K(M_{\text{deceptive}}) \approx K(U_{\text{aligned}}) - K(U_{\text{deceptive}})$$

and the problem becomes that both $M_{\text{deceptive}}$ and $M_{\text{aligned}}$ will produce behavior that looks aligned on the training distribution, but $U_{\text{aligned}}$ has to be much more complex. To see this, note that essentially any $U_{\text{deceptive}}$ will yield good training performance, because the model will choose to act deceptively during training, whereas if you want to get good training performance without deception, then $U_{\text{aligned}}$ has to actually encode the full objective, which is likely to make it quite complicated.
I understand how this model explains why agents become unaligned under distributional shift. That’s something I never disputed. However, I don’t understand how this model applies to my proposal. In my proposal, there is no distributional shift, because (thanks to the realizability assumption) the real environment is sampled from the same prior that is used for training with C. The model can’t choose to act deceptively during training, because it can’t distinguish between training and deployment. Moreover, the objective I described is not complicated.
Yeah, that’s a fair objection—my response to that is just that I think that preventing a model from being able to distinguish training and deployment is likely to be impossible for anything competitive.
Okay, but why? I think the reason you have this intuition is that the realizability assumption is false. But then you should concede that the main weakness of the OP is the realizability assumption rather than the difference between deep learning and Bayesianism.
Perhaps I just totally don’t understand what you mean by realizability, but I fail to see how realizability is relevant here. As I understand it, realizability just says that the true model has some non-zero prior probability—but that doesn’t matter (at least for the MAP, which I think is a better model than the full posterior for how SGD actually works) as long as there’s some deceptive model with greater prior probability that’s indistinguishable on the training distribution, as in my simple toy model from earlier.
When talking about uniform (worst-case) bounds, realizability just means the true environment is in the hypothesis class, but in a Bayesian setting (like in the OP) it means that our bounds scale with the probability of the true environment in the prior. Essentially, it means we can pretend the true environment was sampled from the prior. So, if (by design) training works by sampling environments from the prior, and (by realizability) deployment also consists of sampling an environment from the same prior, training and deployment are indistinguishable.
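For reference, the standard dominance bound behind “bounds scale with the probability of the true environment in the prior” (my gloss, not from the OP): if $\xi=\sum_\nu w(\nu)\,\nu$ is the Bayes mixture and $\mu$ is the true environment with prior weight $w(\mu)>0$, then $\xi(x_{1:T}) \ge w(\mu)\,\mu(x_{1:T})$ for every sequence, so the cumulative log-loss overhead is bounded:

$$\sum_{t=1}^{T}\Big(-\log \xi(x_t\mid x_{<t})\Big) \;-\; \sum_{t=1}^{T}\Big(-\log \mu(x_t\mid x_{<t})\Big) \;\le\; \log\frac{1}{w(\mu)}.$$

The guarantee degrades only logarithmically in the prior weight of the true environment, which is the sense in which the bound “scales with the probability of the true environment in the prior.”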
Sure—by that definition of realizability, I agree that’s where the difficulty is. Though I would seriously question the practical applicability of such an assumption.
What’s the distinction between training and deployment when the model can always query for more data?

We’re doing meta-learning. During training, the network is not learning about the real world; it’s learning how to be a safe predictor. It’s interacting with a synthetic environment, so a misprediction doesn’t have any catastrophic effects: it only teaches the algorithm that this version of the predictor is unsafe. In other words, the malign subagents have no way to attack during training, because they can access little information about what the real universe is like. The training process is designed to select predictors that only make predictions when they can be confident, and the training performance allows us to verify that this goal has truly been achieved.
You have no guarantees, sure, but that’s a problem with deep learning in general and not just inner alignment. The point is, if your model is not optimizing that reward function then its performance during training will be suboptimal. To the extent your algorithm is able to approximate the true optimum during training, it will behave safely during deployment.
that’s a problem with deep learning in general and not just inner alignment
I think you are understanding inner alignment very differently than we define it in Risks from Learned Optimization, where we introduced the term.
The point is, if your model is not optimizing that reward function then its performance during training will be suboptimal.
This is not true for deceptively aligned models, which is the situation I’m most concerned about, and—as we argue extensively in Risks from Learned Optimization—there are a lot of reasons why a model might end up pursuing a simpler/faster/easier-to-find proxy even if that proxy yields suboptimal training performance.

It may be helpful to point to specific sections of such a long paper.

(Also, I agree that a neural network trained with that reward could produce a deceptive model that makes a well-timed error.)