Thank you for the reply. The paper looks to be very useful, but will take me some time to fully digest. What you said about affecting LLMs’ success by breaking the similarity of problems with something as simple as an emoji is so interesting. : ) It also never occurred to me that GPT4 might have been affected by the underlying idea that children should never be left unattended. It goes to show that “arbitrary” details are not always arbitrary. Fascinating! Many thanks!
The gist of the paper and the research that led to it got a great writeup in Quanta magazine, if you'd like something more digestible:
https://www.quantamagazine.org/new-theory-suggests-chatbots-can-understand-text-20240122/