That’s entirely expected. Hallucinating is a typical habit of language models. They do it unless some prompt engineering has been applied.
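For concreteness, one common prompt-engineering mitigation is to instruct the model to answer only from supplied context and to admit ignorance otherwise. A minimal sketch (the prompt wording and helper function here are illustrative, not any particular library’s API):

```python
# Hypothetical anti-hallucination system instruction (illustrative wording).
GROUNDING_INSTRUCTION = (
    "Answer using only the provided context. "
    "If the context does not contain the answer, reply exactly: I don't know."
)

def build_prompt(context: str, question: str) -> str:
    """Assemble a grounded prompt that discourages fabricated answers."""
    return f"{GROUNDING_INSTRUCTION}\n\nContext:\n{context}\n\nQuestion: {question}"

prompt = build_prompt(
    context="The Eiffel Tower is in Paris.",
    question="Where is the Eiffel Tower?",
)
print(prompt)
```

The assembled string would then be sent to whatever model API you use; the point is only that the instruction gives the model an explicit licensed way to say it doesn’t know, rather than inventing an answer.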