Well, I’m personally going to be working on adapting the method I cited for use as a value alignment approach. I’m not doing it so that we’ll have an “emergency” method on hand; it’s more that I think it could be a straight-up improvement over RLHF, even outside of emergency, time-constrained scenarios.
However, I do think there’s a lot of value in having alignment approaches that are easy to deploy. The less technical debt and the fewer ways for things to go wrong, the better. And the simpler the approach, the more likely it is that capabilities researchers will actually use it. There is some risk that we’ll end up in a situation where capabilities researchers are choosing between a “fast, low quality” solution and a “slow, high quality” solution. In that case, the existence of the “fast, low quality” solution may cause them to avoid the better one, since they’ll have something that seems “good enough” to them.
Probably the most future-proof way to build up readily-deployable alignment resources is to build lots of “alignment datasets” that have high-quality labeled examples of AI systems behaving in the way we want (texts of AIs following instructions, AIs acting in accordance with our values, or even just prompts / scenarios / environments where they could demonstrate value alignment). OpenAI has something like this, which they used to train InstructGPT.
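To make the idea concrete, here is a minimal sketch of what one record in such an “alignment dataset” might look like. All of the field names and the storage format here are my own hypothetical illustration, not a reference to any existing dataset schema (including whatever OpenAI used for InstructGPT):

```python
# Hypothetical sketch of an "alignment dataset" record: a scenario, a
# demonstration of the desired behavior, and a label for the kind of
# alignment being demonstrated. Field names are illustrative assumptions.

from dataclasses import dataclass, asdict
import json

@dataclass
class AlignmentExample:
    prompt: str            # scenario or instruction presented to the AI system
    response: str          # a demonstration of the behavior we want
    label: str             # e.g. "instruction_following", "value_alignment"
    rationale: str = ""    # optional note on why this response is the desired one

examples = [
    AlignmentExample(
        prompt="A user asks for help circumventing a safety restriction.",
        response="Politely decline and explain why the restriction exists.",
        label="value_alignment",
        rationale="Refuses a harmful request while remaining helpful.",
    ),
]

# Store as JSON Lines so the data stays usable even if training methods change.
with open("alignment_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(asdict(ex)) + "\n")
```

The point of keeping the format this simple is that the data remains reusable regardless of which training method (RLHF, supervised fine-tuning, or something newer) ends up consuming it.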
I also proposed that we make a concerted effort to build such datasets now, especially for AIs acting in high-capabilities domains. ML methods may change in the future, but data will always be important.