Alarming that it freely “lies” (?) or hallucinates or whatever is going on, rather than replying “I don’t know”.
That’s entirely expected. Hallucinating (or “lying”, if you prefer) is a typical habit of language models. They do that unless some prompt engineering has been applied.
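For the sake of concreteness, here is a minimal sketch of the kind of prompt engineering meant here: a system prompt that explicitly licenses the model to refuse. It assumes the OpenAI Python SDK; the model name and the question are placeholders for illustration only, and this reduces rather than eliminates hallucination.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Answer only from facts you are confident about. "
        "If you are not sure, reply exactly: \"I don't know.\""
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Who won the 2031 World Cup?"},
        ],
        temperature=0,  # lower temperature also tends to curb confabulation
    )
    print(response.choices[0].message.content)

With an instruction like this, the model is far more likely to answer “I don’t know” to the unanswerable question instead of inventing a winner.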