This really depends on the definition of “smarter”. There is a valid sense in which Stockfish is “smarter” than any human. Likewise, there are many valid senses in which GPT-4 is “smarter” than some humans, and some valid senses in which GPT-4 is “smarter” than all humans (e.g., at token prediction). There will likewise be senses in which GPT-5 is “smarter” than a larger fraction of humans than GPT-4 is, and perhaps it will be smarter than Sam Altman under a larger set of possible definitions of “smarter”.
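To make the “token prediction” sense concrete: it is a directly measurable quantity, namely per-token cross-entropy (or its exponent, perplexity), and it is the one benchmark where the comparison with humans is unambiguous. A minimal sketch of how one would measure it, assuming the Hugging Face transformers library (the model name `gpt2` is just an arbitrary small example, not a claim about any particular model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal language model works the same way; gpt2 is just small and convenient.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy
    # over next-token predictions for this text.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

perplexity = torch.exp(loss).item()
print(f"per-token loss: {loss.item():.3f}, perplexity: {perplexity:.2f}")
```

Scoring a human on the same metric requires a guessing game in the style of Shannon’s prediction experiments, and on that metric even mid-sized models already win; whether that sense of “smarter” matters is exactly the question.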
Will that actually mean anything? Who knows.
By playing with definitions like this, Sam Altman can simultaneously generate hype by implication (“GPT-5 will be a superintelligent AGI!!!”) and then, if GPT-5 underdelivers, avoid significant reputational damage by retroactively assigning a weaker meaning to his past words (“here’s a very specific sense in which GPT-5 is smarter than me, that’s what I meant, hype is out of control again, smh”). This is a classic rhetorical tactic; essentially a motte-and-bailey variant.