This. Asking GPT-4 a question might yield an obviously wrong answer, but sometimes, just following up with “That answer contains an obvious error. Please correct it.” (without saying what the error was) results in a much better answer. GPT-4 is not a person in the sense that each internet user is.
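The follow-up pattern described here can be sketched in code. This is a minimal illustration, assuming an OpenAI-style chat API where conversation history is a list of role/content messages; the helper name `with_correction_followup` and the model name are hypothetical.

```python
def with_correction_followup(question: str, first_answer: str) -> list[dict]:
    """Build the message history for a second, corrective turn.

    The correction prompt deliberately does NOT say what the error was;
    the point is that the model often finds it on its own.
    """
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": first_answer},
        {"role": "user",
         "content": "That answer contains an obvious error. Please correct it."},
    ]

messages = with_correction_followup(
    "A bat and a ball together cost $1.10; the bat costs $1 more "
    "than the ball; how much does the ball cost?",
    "$0.10",
)
# A real second call would look roughly like (not run here):
# client.chat.completions.create(model="gpt-4", messages=messages)
```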
How does that argument go? The same is true of a person doing (say) the cognitive reflection task.
“A bat and a ball together cost $1.10; the bat costs $1 more than the ball; how much does the ball cost?”
Standard answer: “$0.10”. But just as typically, if you tell them “That’s not correct”, the person will quickly realize their mistake.
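For reference, the correct answer falls out of simple algebra: if the ball costs b and the bat costs b + 1.00, then 2b + 1.00 = 1.10, so b = 0.05. A quick check:

```python
# bat + ball = 1.10 and bat = ball + 1.00
# => 2*ball + 1.00 = 1.10  =>  ball = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

# The intuitive "$0.10" fails the constraint: 0.10 + 1.10 = 1.20, not 1.10.
assert abs(ball - 0.05) < 1e-9
assert abs((bat + ball) - 1.10) < 1e-9
```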
Well, that’s true; people do that too. I was trying to point to the idea that LLMs can act like multiple different people when properly prompted to do so.