Anthropomorphizing a little, I tentatively advance that ChatGPT knows the right answer, but uses a different reasoning process (part of its “brain”) to explain what the answer is.
Humans do that all the time, so it’s no surprise that ChatGPT would do it as well.
Often we believe something is the right answer because we have accumulated lots of different pieces of evidence that would be impossible to summarize in a few paragraphs.
That's especially true for ChatGPT. It might believe that something is the right answer because 10,000 experts in its training data believe it's the right answer, not because of a chain of reasoning.