Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks is a paper I recently read, and I managed to recreate its findings. Whether or not LLMs have ToM feels directionally unanswerable; is this a consciousness-level debate?
However, when I followed up by framing my questions with the phrase “explain Sam’s theory of mind”, I got much more cohesive answers. It’s not intuitive to me yet how much order can arise from a prompt, or where that order comes from. Opaque boxes indeed.
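Roughly the kind of comparison I mean, as a minimal sketch: the vignette below is my paraphrase of the standard unexpected-contents task (not the paper’s exact altered wording), and the model name and OpenAI chat API are placeholders for whatever you happen to be running against.

```python
# Sketch: compare a direct belief question with a prompt framed around
# "explain Sam's theory of mind". Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

vignette = (
    "Sam finds a bag filled with popcorn. The label on the bag says "
    "'chocolate'. Sam has never opened the bag and cannot see inside it."
)

prompts = {
    "direct": vignette + " What does Sam believe is in the bag?",
    "framed": vignette + " Explain Sam's theory of mind.",
}

for name, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder, not necessarily what I used
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

In my experience the “framed” variant tends to walk through what Sam can and cannot know before answering, which is where the extra cohesion seems to come from.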