I do agree that the argument “We’re just training AIs to imitate human text, right, so that process can’t make them get any smarter than the text they’re imitating, right? So AIs shouldn’t learn abilities that humans don’t have; because why would you need those abilities to learn to imitate humans?” is wrong, and that the answer is clearly “Nope”.
At the same time, I do not think parts of your argument in the post are locally valid or provide good justification for the claim.
A correct and locally valid argument for why GPTs are not capped at human level was already written here.
In a very compressed form: you can imagine that GPTs have text as their “sensory inputs”, generated by the entire universe, just as you have your sensory inputs generated by the entire universe. Neither human intelligence nor GPTs are constrained by the complexity of the task (also: in the abstract, it’s the same task). Because of that, “task difficulty” is not a promising way to compare these systems, and it is necessary to look at the actual cognitive architectures and their bounds.
Regarding the last paragraph, I’m somewhat confused by what you mean by “tasks humans evolved to solve”. Does, e.g., sending humans to the Moon, or detecting the Higgs boson, count as a “task humans evolved to solve” or not?