Are you sure introspection won’t work?
If you ask the model, “does the following text contain made-up facts you cannot locate on Bing”, can it then check whether Bing has citations for each claim?
It looks like it will work. This is counterintuitive, but it’s because the model never did any of this “introspection” when it generated the string. It just rattled off whatever it predicted came next, within the particular region of multidimensional knowledge space you happen to be in.
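A minimal sketch of that check, assuming hypothetical `call_llm` and `bing_search` helpers (they stand in for whatever model and search APIs you actually use; neither is a real library call):

```python
# Sketch only: call_llm and bing_search are hypothetical placeholders,
# not real APIs.

def call_llm(prompt: str) -> str:
    """Placeholder: send a prompt to a model instance and return its reply."""
    raise NotImplementedError

def bing_search(query: str) -> list[str]:
    """Placeholder: return text snippets from a web search."""
    raise NotImplementedError

VERIFY_PROMPT = (
    "Here is a generated answer:\n{answer}\n\n"
    "Here are web search results for it:\n{snippets}\n\n"
    "Does the answer contain made-up facts that the search results do not "
    "support? Reply with exactly one word: SUPPORTED or UNSUPPORTED."
)

def introspection_check(answer: str) -> str:
    """Ask a fresh model instance whether the answer is backed by search hits."""
    snippets = "\n".join(bing_search(answer)[:5])
    return call_llm(VERIFY_PROMPT.format(answer=answer, snippets=snippets)).strip()
```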
You could automate this. Have the model generate several possible answers to a query, then prompt other instances of the same model to search for common classes of errors and respond in language that can be scored.
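One way that automation could look, reusing the placeholders from the sketch above; the particular error classes and the 0-to-10 scale are illustrative choices, not anything prescribed here:

```python
# Sketch only: generate candidate answers, then have critic instances of the
# same model hunt for specific error classes and reply with a parseable score.

ERROR_CLASSES = ["made-up facts", "fabricated citations", "arithmetic mistakes"]

CRITIC_PROMPT = (
    "Question: {query}\nAnswer: {answer}\n\n"
    "Check the answer for {error_class}. Reply with a single integer from "
    "0 (riddled with errors) to 10 (clean)."
)

def generate_candidates(query: str, n: int = 4) -> list[str]:
    """Sample several possible answers from the same model."""
    return [call_llm(f"Answer the question: {query}") for _ in range(n)]

def score_candidate(query: str, answer: str) -> float:
    """Average the critics' scores across the error classes."""
    scores = []
    for error_class in ERROR_CLASSES:
        reply = call_llm(CRITIC_PROMPT.format(
            query=query, answer=answer, error_class=error_class))
        try:
            scores.append(int(reply.strip()))
        except ValueError:
            scores.append(0)  # unparseable critic output counts as a failure
    return sum(scores) / len(scores)
```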
Then run RL on the answers that are least wrong, or apply negative feedback to the answer that most disagrees with the introspection. This “shapes” the model’s multidimensional space so it is more likely to produce correct answers and less likely to give made-up facts.
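The RL step itself is beyond a sketch, but the raw material for it is just preference data built from those scores; something like the following (the pairing scheme is my framing, not spelled out above):

```python
# Sketch only: pick the least-wrong and most-wrong candidates as a preference
# pair, the kind of data an RLHF- or DPO-style fine-tuning step consumes.

def build_preference_pair(query: str) -> dict:
    candidates = generate_candidates(query)
    ranked = sorted(candidates, key=lambda a: score_candidate(query, a))
    return {
        "prompt": query,
        "chosen": ranked[-1],   # least wrong: reinforce
        "rejected": ranked[0],  # most disagrees with the introspection: penalize
    }
```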