I get a different justification for the incorrect answer from ChatGPT-3.5. If I precede the question with “optimize for mathematical precision”, I get the right answer. ChatGPT-4 gets it right the first time, for me. Even if I ask it “explain why 2023 is a prime number”, it says it’s not prime.
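For what it's worth, the models' final verdict is easy to verify by hand or with a quick trial-division sketch (the helper name here is my own):

```python
def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n (or n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# 2023 = 7 * 17 * 17, so it is composite.
print(smallest_factor(2023))  # 7
```

Since the smallest factor is 7 rather than 2023 itself, 2023 is not prime, matching what both models eventually say.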