yes and no. several restatements of the same point, may contain 1am errors, challenge my view:
much is encoded in human culture, but individual humans regularly push far beyond culture given a confluence of good learning factors, and I do still think there’s more to know about the basis for strongly general algorithmic intelligence beyond “just throw a dense transformer at it haha”.
reinforcement learning and empowerment objectives ought to be able to reinvent disproportionately large amounts of culture from scratch, so we should not expect to spend decades at this stage of ai capability, where ai is near human algorithmic intelligence but still struggling to reach it.
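(for concreteness, by “empowerment” I mean the standard information-theoretic objective, roughly the channel capacity from an agent’s next n actions to the sensor state they produce:

    E(s_t) = max_{p(a_{t:t+n-1})} I(A_{t:t+n-1} ; S_{t+n} | s_t)

an agent maximizing this seeks states from which its actions most shape the future, which is what would let it generate its own curriculum instead of inheriting one from culture. nothing here depends on the exact form.)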
deepmind’s strongest successes (alphago/alphazero-style systems) show that the ability to usefully map abstract spaces tops out well above human level. so while I agree to some degree that algorithmic intelligence is bottlenecked by the cultural head start, the best algorithms for learning from culture should turn out to find the relevant parts of the human corpus much more efficiently than most humans do, and therefore should also push beyond human knowledge more efficiently.
I agree that the training data availability problem is real for current ai, but human-level ai should only require human-level amounts of training data, and while we’re doing really well on total capability, a human who had seen everything gpt-3 saw would be much smarter. any human, I suspect, not just the ones we think of as smart: trying hard at that much hard stuff changes a person.
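(rough numbers as a sanity check, these are estimates rather than anything precise: gpt-3 was trained on roughly 3 x 10^11 tokens, while a human by early adulthood has plausibly encountered on the order of 10^8 to 10^9 words of language. so current systems get something like a 100x to 1000x data head start and still land near, not above, human algorithmic intelligence, which is why I read the gap as algorithmic rather than data-limited.)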