If that’s your belief, I think you should add a disclaimer to your TL;DR section, something like “Gemini and GPT-4 authors report results close to or matching human performance at 95%, though I don’t trust their methodology”.
Also, the numbers aren’t “non-provable”: anyone could just replicate them with the GPT-4 API! (Modulo dataset contamination considerations.)
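To be concrete about what I mean by “just replicate them”, here’s a minimal sketch of the kind of spot-check anyone could run, assuming the current openai Python client; the two hard-coded questions are made-up placeholders, not items from any real benchmark:

```python
# Rough sketch: spot-checking a reported multiple-choice accuracy
# against the GPT-4 API. The items below are placeholders; a real
# replication would load the actual benchmark questions and use the
# paper's prompt format.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder multiple-choice items (hypothetical, for illustration only).
items = [
    {"question": "2 + 2 = ?  (A) 3  (B) 4  (C) 5  (D) 22", "answer": "B"},
    {"question": "The capital of France is:  (A) Lyon  (B) Nice  (C) Paris  (D) Lille", "answer": "C"},
]

correct = 0
for item in items:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic-ish answers for scoring
        messages=[
            {"role": "system", "content": "Answer with a single letter: A, B, C, or D."},
            {"role": "user", "content": item["question"]},
        ],
    )
    prediction = response.choices[0].message.content.strip()[:1].upper()
    correct += prediction == item["answer"]

print(f"Accuracy: {correct / len(items):.0%}")
```

A real replication would swap in the actual benchmark items and prompt template from the paper, which is exactly where the contamination caveat bites.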
Thanks for the recommendation, though I’ll think about a more fundamental solution that addresses all the ethical/communal concerns.
“Gemini and GPT-4 authors report results close to or matching human performance at 95%, though I don’t trust their methodology.” Regarding this, just to clear things up: since I’m writing under my real name, I do trust the authors and the ethics of both OpenAI and DeepMind. It’s just me questioning everything while I still can as a student. But I’ll make sure not to cause any further confusion, as you recommended!