I get a different justification for the incorrect answer from ChatGPT-3.5. If I precede the question with “optimize for mathematical precision”, I get the right answer. ChatGPT-4 gets it right the first time, for me. Even if I ask it “explain why 2023 is a prime number”, it says it’s not prime.
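For what it's worth, 2023 is definitely composite: 2023 = 7 × 17 × 17. A quick trial-division sketch (plain Python, nothing to do with ChatGPT's internals) confirms it:

    def smallest_factor(n):
        # Return the smallest prime factor of n (or n itself if n is prime).
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    n = 2023
    factors = []
    while n > 1:
        f = smallest_factor(n)
        factors.append(f)
        n //= f

    print(factors)  # [7, 17, 17] -> 2023 is not prime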
This seems fairly typical of how ChatGPT does math, to me:
-come up with an answer
-use “motivated reasoning” to try and justify it, even if it results in a contradiction
-ignore the contradiction, no matter how obvious it is
I know ChatGPT isn’t great with math, but this seems quite bizarre.