Then it will often confabulate a reason why the correct thing it said was actually wrong. So you can never really trust it; you have to think about what makes sense and test your model against reality.
But to some extent that’s true for any source of information. LLMs are correct about a lot of things and you can usually guess which things they’re likely to get wrong.