it may not be safe to perform capability evaluations via fine-tuning on sufficiently powerful models before we can evaluate them for alignment
I mean, like I say in the post, if you have some strong reason to believe that there’s no gradient hacking going on, then I think this is safe in the i.i.d. setting, and likewise if you can rule out exploration hacking in the RL setting. You just have to have that strong reason somehow (which is maybe what you mean by saying we can evaluate them for alignment?).
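(For concreteness, here is a rough sketch of the kind of fine-tuning-based capability eval I have in mind for the i.i.d. case. The model name, data, and hyperparameters are placeholders, not a real eval setup; the point is just the shape of the loop: fine-tune on held-in demonstrations of the capability, then measure held-out loss on the same distribution.)

```python
# Minimal sketch of a fine-tuning-based capability evaluation in the i.i.d. setting.
# Placeholders throughout: "gpt2" stands in for the model under evaluation, and the
# train/test texts stand in for i.i.d. demonstrations of the capability of interest.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the model being evaluated
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def encode(examples):
    # examples: list of strings demonstrating the capability of interest
    batch = tokenizer(examples, return_tensors="pt", padding=True, truncation=True)
    batch["labels"] = batch["input_ids"].clone()
    batch["labels"][batch["attention_mask"] == 0] = -100  # ignore padding in the loss
    return batch

train_texts = ["..."]  # held-in demonstrations (placeholder)
test_texts = ["..."]   # held-out examples from the same distribution (placeholder)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):
    for i in range(0, len(train_texts), 8):
        loss = model(**encode(train_texts[i:i + 8])).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Capability estimate: held-out loss after fine-tuning on the task.
model.eval()
with torch.no_grad():
    test_loss = model(**encode(test_texts)).loss.item()
print(f"held-out loss after fine-tuning: {test_loss:.3f}")
```

The worry about gradient hacking is exactly that the held-out loss at the end of a loop like this can understate what the model could actually do if it were steering its own training.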