That’s entirely expected. Hallucinating is a typical habit of language models; they do it unless some prompt engineering has been applied.
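For illustration only, here is a minimal sketch of one common prompt-engineering mitigation, assuming the OpenAI Python SDK; the model name, the `grounded_answer` helper, and the system prompt wording are all hypothetical, not anything from the original setup. The idea is simply to tell the model to answer only from supplied context and to admit uncertainty instead of guessing.

```python
# Hypothetical sketch: ground the model in provided context to curb hallucination.
# Model name, prompt text, and helper name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Answer using only the provided context. "
    "If the context does not contain the answer, reply exactly: 'I don't know.'"
)

def grounded_answer(question: str, context: str, model: str = "gpt-4o-mini") -> str:
    # Lower temperature reduces speculative completions.
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("Who wrote the report?", "The report was written by the QA team."))
```

This doesn't eliminate hallucination, but constraining the model to the given context and giving it an explicit "I don't know" escape hatch usually cuts down on made-up answers noticeably.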