I think the point of this post is more “how do we get the AI to do what we want it to do”, and less “what should we want the AI to do”.
That is, there’s value in trying to figure out how to align an LLM to any goal, regardless of whether a “better” goal exists. And the technique in the post doesn’t depend on what target you have for the LLM: maybe someone wants to design an LLM that only answers questions about explosives, in which case they could still use the techniques described in the post to do that.
That sounds fairly straightforward.
(1) The AI needs a large and comprehensive RL bench to train on, where we stick to tasks that have a clear right or wrong answer.
(2) The AI needs to output an empirical confidence in its answer, and emit responses appropriate to its level of confidence. It’s empirical in the sense that it means “if I were giving this answer on the RL test bench, this is approximately how likely it is to be marked correct”.
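One standard way to make that confidence empirical is to score it on the bench with a proper scoring rule, so that reporting the model’s true pass probability is what maximizes expected reward. A minimal sketch in Python (the function name and reward shape are my own illustration, not something from the post):

```python
def calibration_reward(stated_confidence: float, was_correct: bool) -> float:
    """Brier-style reward: 1 - (confidence - outcome)^2.

    Expected reward is maximized exactly when the model reports its
    true probability of being marked correct on the bench.
    """
    outcome = 1.0 if was_correct else 0.0
    return 1.0 - (stated_confidence - outcome) ** 2

# If the model actually passes 70% of tasks like this one, reporting 0.7
# is optimal: 0.7 * calibration_reward(0.7, True)
#           + 0.3 * calibration_reward(0.7, False) = 0.79,
# and any other report scores strictly worse in expectation.
```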
For the ChatGPT/GPT-n system, the bench could be:
(1) multiple-choice tests from many high school and college courses
(2) tasks from computer programming that are measurably gradable, such as:
a. Coding problems from LeetCode/CodeSignal, where we grade the AI’s submission
b. Coding problems of the form “translate this program in language X to language Y and pass the unit tests”
c. Coding problems of the form “this is a WRONG answer from a coding website (you could make a deal with LeetCode/CodeSignal to get these); write a unit test that will fail on this answer but pass on a correct answer”
d. Coding problems of the form “take this suboptimal solution and make it run faster”
e. Coding problems of the form “here’s the problem description and a submission; will this submission work, and if not, write a unit test that will fail”
And so on. Other codebases with deep unit-test coverage, where it’s usually possible to tell whether the AI broke something, could supply challenges for a-e as well.
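What makes a-e attractive is that grading is fully mechanical: run the submission against a hidden unit-test suite in a separate process and reward pass/fail (plus wall-clock time for d). A rough sketch of such a grader; the file names and timeout are placeholders, and a real bench would also sandbox filesystem and network access:

```python
import os
import subprocess
import sys
import tempfile

def grade_submission(solution_code: str, test_code: str, timeout_s: float = 10.0) -> bool:
    """Return True iff the submitted code passes the hidden unit tests."""
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "solution.py"), "w") as f:
            f.write(solution_code)
        with open(os.path.join(tmp, "test_solution.py"), "w") as f:
            f.write(test_code)
        try:
            result = subprocess.run(
                [sys.executable, "-m", "unittest", "test_solution"],
                cwd=tmp,
                capture_output=True,
                timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False  # hangs and infinite loops count as failures
        return result.returncode == 0
```

Task c just inverts the check: reward a generated test file that fails on the known-wrong submission but passes on a reference solution, i.e. two calls to the same grader.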
Oh, and the machine needs a calculator, plus, I guess, a benchmark that is basically “Kumon math”.
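That arithmetic bench is the easiest part to build, since drill problems can be generated together with their answers. A toy sketch (entirely my own construction):

```python
import operator
import random

# Operator symbols paired with the functions that compute the answer.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def kumon_item(rng: random.Random, digits: int = 4) -> tuple[str, str]:
    """Generate one arithmetic drill problem and its exact answer."""
    a = rng.randrange(10 ** (digits - 1), 10 ** digits)
    b = rng.randrange(10 ** (digits - 1), 10 ** digits)
    sym = rng.choice(sorted(OPS))
    return f"{a} {sym} {b} = ?", str(OPS[sym](a, b))

# Grading is an exact string match against the canonical integer answer.
question, answer = kumon_item(random.Random(0))
```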
The main thing is that knowing whether an answer is correct is different from knowing whether it is morally right. And “simply correct” is possibly the easier problem.