Thanks for the post and writeup, and good work! I especially appreciate the short, informal explanation of what makes this work.
Given my current understanding of the proposal, I have one worry which makes me reluctant to share your optimism about this being a solution to inner alignment:
The scheme doesn’t protect us if somehow all top-n demonstrator models have correlated errors. This could happen if they are coordinating, or more prosaically if our way to approximate the posterior leads to such correlations. The picture I have in my head for the latter is that we train a big ensemble of neural nets and treat a random sample from that ensemble as a random sample from the posterior, although I don’t know if that’s how it’s actually done.
A lot of the work is done by the assumption that the true demonstrator is in the posterior, which means at least one of the top-performing models will not share the correlated errors. But I’m not sure how true this assumption will be in the neural-net approximation I describe above. I worry about inner alignment failures because I don’t really trust the neural net prior, and I can imagine a bunch of trained neural nets ending up with correlated weirdnesses (in part because of the neural net prior they share, and in part because of things like Adversarial Examples Are Not Bugs, They Are Features). So it wouldn’t be that surprising to me if ensembles turned out to have certain correlated errors, and in particular didn’t really represent anything like the demonstrator.
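To make the worry concrete, here is a toy sketch (all names are hypothetical stand-ins, not the paper’s actual algorithm) of the “act only when the top models agree, otherwise query the demonstrator” step, and of how correlated errors defeat it:

```python
# Toy sketch of the "act when the top models agree, else query" step.
# Everything here is a hypothetical stand-in, not the actual algorithm.

def act_or_query(top_models, obs, demonstrator):
    """Act autonomously only when the top models are unanimous;
    otherwise defer to the demonstrator."""
    actions = {m(obs) for m in top_models}
    if len(actions) == 1:
        return actions.pop(), False   # unanimous: act on the shared answer
    return demonstrator(obs), True    # disagreement: query the demonstrator

# Stand-in policies: two ensemble members share a correlated error
# on a particular observation; one matches the demonstrator.
good = lambda obs: "safe"
buggy1 = lambda obs: "unsafe" if obs == "edge_case" else "safe"
buggy2 = lambda obs: "unsafe" if obs == "edge_case" else "safe"
demonstrator = lambda obs: "safe"

# If a demonstrator-like model survives into the top-n, the
# correlated error merely triggers a query:
action1, queried1 = act_or_query([good, buggy1, buggy2], "edge_case", demonstrator)
print(action1, queried1)   # safe True

# But if correlated errors crowd it out of the top-n, the wrong
# models are unanimous and the system acts confidently wrong:
action2, queried2 = act_or_query([buggy1, buggy2], "edge_case", demonstrator)
print(action2, queried2)   # unsafe False
```

The safety argument rests entirely on the unanimity check, so it only works if at least one model in the top-n breaks the correlation.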
I do feel safer using this method than I would deferring to a single model, so it’s still a good idea on balance. I’m just not convinced that it solves the inner alignment problem; rather, it ameliorates its severity, which may or may not be sufficient.
Yes, I agree that an ensemble of models generated by a neural network may have correlated errors. I only claim to solve the inner alignment problem in theory (i.e. for idealized Bayesian reasoning).