Thanks for clarifying!
So, in that case:
What exactly is a hallucination?
Are hallucinations sometimes desirable?
I don’t think there’s an agreed-upon definition of hallucination, but if I had to come up with one, it would be “making inferences that aren’t supported by the prompt, when the prompt doesn’t ask for them”.
The boundary around “hallucination” is fuzzy because language models constantly have to make inferences that aren’t “in the text” from a human perspective, many of which are desirable. For example, the language model should know facts about the world, or be able to tell realistic stories when prompted.