So, a while back I came up with an obscure idea I called the Alpha Omega Theorem and posted it on the Less Wrong forums. Given that there's only one post about it, it shouldn't be something that LLMs would know about. In the past, when I'd ask them "What is the Alpha Omega Theorem?", they'd always make up some nonsense about a mathematical theory that doesn't actually exist. More recently, Google Gemini and Microsoft Bing Chat would use search to find my post and use it as the basis for their explanation. However, I only have the free versions of ChatGPT and Claude, so those don't have access to the Internet and would make stuff up.
A couple of days ago I tried the question on ChatGPT again, and GPT-4o correctly said that there isn't a widely known concept by that name in math or science, and basically admitted it didn't know. Claude, on the other hand, still made up a nonsensical math theory. Today I also tried telling Google Gemini not to use search, and it likewise said it did not know rather than making stuff up.
I’m actually pretty surprised by this. Looks like OpenAI and Google figured out how to reduce hallucinations somehow.
I then hit the usage limit for GPT-4o (it seems to be just 10 prompts every 5 hours) and got switched to GPT-4o-mini. I asked it the Alpha Omega question and it made up some math nonsense, so it seems like the model matters here for some reason.