I noticed this a while ago. It’s not the only thing that AIs have trouble with.
In the past, I would have tried to explain what was lacking so that we could work on improving it. Now I’m glad that they don’t know.
My unpleasant belief is as follows: if somebody is going to work on a tool that could bring danger to humanity, then they should at least be intelligent enough to notice trivial things like this. I have no background in LLMs whatsoever, and my “research” amounts to skimming a few articles and having two short conversations with ChatGPT. But even I can tell what goes wrong and why, because I have thought a bit about intelligence in humans.
If you had zero competence in a field, and you saw an “expert” struggling with something you could help him with, you’d likely worry and question his competence as well as your own.