It seems LLMs are less likely to hallucinate answers if you end each question with ‘If you don’t know, say “I don’t know”’.
They still hallucinate a bit, but less. Given how easy it is, I'm surprised OpenAI and Microsoft don't already do that.
Has its own failure modes. What does it even mean for an LLM not to know something? "I don't know" is just another possible completion, not a report of the model's internal state.
Still a nice prompt. Also works on humans.
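For what it's worth, the trick is just string concatenation before the API call. A minimal sketch using the OpenAI Python SDK (the model name and the ask helper are illustrative, not from the original posts):

    # Minimal sketch: append an "I don't know" escape hatch to each question.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    HEDGE = ' If you don\'t know, say "I don\'t know".'

    def ask(question: str) -> str:
        """Send a question with the hedging suffix appended."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": question + HEDGE}],
        )
        return response.choices[0].message.content

    # A question with no real answer (the race began in 1903),
    # where "I don't know" beats a confident fabrication.
    print(ask("Who won the 1897 Tour de France?"))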