AI systems, such as Large Language Models (LLMs), are trained on human data and designed by human engineers. It’s impossible for them to exceed the bounds of human knowledge and expertise, as they’re inherently limited by the information they’ve been exposed to.
Maybe, on current algorithms, LLMs do run into a plateau somewhere around the level of human expertise. That seems plausible. But if so, it is not because being trained on human data necessarily caps a model at human level!
Accurately predicting human text is much harder than just writing stuff on the internet. To predict a text well, you have to model whatever process produced it: the author's knowledge, reasoning, and circumstances. If GPT were to perfect the skill it is actually being trained on, it would have to be much smarter than the humans who wrote its training data!
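To make "the skill it is being trained on" concrete, here is a minimal sketch of the next-token prediction objective, with toy numbers that are purely illustrative (no real tokenizer or model is assumed). The loss rewards putting accurate probability on what the human actually wrote next, not merely producing something plausible-looking.

```python
# Minimal sketch of the next-token prediction (cross-entropy) objective,
# with made-up toy distributions -- not any lab's actual training code.
import math

def cross_entropy(actual_next_token: str, predicted_probs: dict) -> float:
    """Loss for one prediction: -log of the probability assigned to the
    token the human actually wrote next."""
    return -math.log(predicted_probs[actual_next_token])

# Two hypothetical models predicting the word after "the cat sat on the".
# Model A just produces plausible-sounding text; Model B has modeled this
# author well enough to concentrate probability on what they really wrote.
model_a = {"mat": 0.4, "rug": 0.3, "moon": 0.2, "theorem": 0.1}
model_b = {"mat": 0.9, "rug": 0.07, "moon": 0.02, "theorem": 0.01}

actual_next = "mat"
print("loss A:", round(cross_entropy(actual_next, model_a), 3))  # ~0.916
print("loss B:", round(cross_entropy(actual_next, model_b), 3))  # ~0.105

# Driving this loss toward its minimum over everything humans have written
# means modeling the processes that generated the text -- which can require
# more knowledge and inference than any individual author brought to it.
```

The asymmetry is the whole point: producing a passable continuation is easy, but consistently beating Model A at this scoring rule across all of human text is not.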