LLM hallucination is good epistemic training. When I code, I’m constantly asking Claude how things work and what’s possible. It often gets things wrong, but it’s still helpful. You just have to use it to help you build up a gears-level model of the system you’re working with. Then, when it confabulates some explanation, you can say “wait, what?? that makes no sense” and it will say “You’re right to question these points—I wasn’t fully accurate” and give you better information.
What if you say that when it was fully accurate?
Then it will often confabulate a reason why the correct thing it said was actually wrong. So you can never really trust it; you have to think about what makes sense and test your model against reality.
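For example (a minimal sketch in Python; the specific claim about str.split is just a hypothetical of the kind of thing a model might assert), checking the claim directly is usually faster than arguing with the model about it:

```python
# Hypothetical claim from a model: "s.split() and s.split(' ') are equivalent."
# A few lines against the real interpreter settle it.
s = "a  b   c"
print(s.split())     # ['a', 'b', 'c']             -- collapses runs of whitespace
print(s.split(' '))  # ['a', '', 'b', '', '', 'c'] -- keeps the empty strings
assert s.split() != s.split(' ')  # the claimed equivalence doesn't hold
```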
But to some extent that’s true for any source of information. LLMs are correct about a lot of things, and you can usually guess which things they’re likely to get wrong.
Not OP, but IME it will do one of three things: (1) insist that it’s right; (2) apologize, think again, and regenerate the code, but it comes out mostly the same (in which case it may or may not claim to have fixed something); or (3) apologize, think again, and regenerate the code in a way that’s actually different.