What if you say that when it was fully accurate?
Then it will often confabulate a reason why the correct thing it said was actually wrong. So you can never really trust it; you have to think about what makes sense and test your mental model against reality.
But to some extent that’s true for any source of information. LLMs are correct about a lot of things and you can usually guess which things they’re likely to get wrong.
Not OP, but IME it might (1) insist that it's right, (2) apologize, think again, and regenerate the code, but produce mostly the same thing (in which case it may or may not claim it fixed something), or (3) apologize, think again, and regenerate the code with something genuinely different.