Putting a claim into ChatGPT and getting “correct” is little evidence that the claim is correct.
Unless your claims are heavily preselected, e.g. for being confusing ones where conventional wisdom is wrong, I think this specific example is inaccurate? If I ask ChatGPT ‘Is Sarajevo the capital of Albania?’, I expect it to be right a large majority of the time.
Fixed, thanks. I had implicitly assumed that all the ChatGPT use we care about concerns complicated, confusing topics, where “correct” would be little evidence.