So… they held the door open to see if it’d escape or not? I predict this testing method may go poorly with more capable models, to put it lightly.
And then OpenAI deployed a more capable version than was tested!
They also did not have access to the final version of the model that we deployed. The final version has capability improvements relevant to some of the factors that limited the earlier model’s power-seeking abilities, such as longer context length, and improved problem-solving abilities as in some cases we’ve observed.
This defeats the entire point of testing.
I am slightly worried that posts like veedrac’s Optimality is the Tiger may have given them ideas. “Hey, if you run it in this specific way, a LLM might become an agent! If it gives you code for recursively calling itself, don’t run it”… so they write that code themselves and run it.
I really don’t know how to feel about this. On one hand, this is taking ideas around alignment seriously and testing for them, right? On the other hand, I wonder what the testers would have done if the answer was “yep, it’s dangerously spreading and increasing its capabilities oh wait no nevermind it’s stopped that and looks fine now”.
Yeah, if the RLHF is supposed to train honesty into it, maybe it would have done worse on the TaskRabbit task. This really seems like a PR throwaway line rather than a legit attempt to red team the final model.