9 years since the last comment—I’m interested in how this argument interacts with GPT-4 class LLMs, and “scale is all you need”.
Sure, LLMs are not evolved the way biological systems are, so the path toward smarter LLMs isn't fragile in the way this article describes brains: maybe the first augmentation works, but the second leads to psychosis.
But LLMs are trained on writing produced by biological systems whose intelligence did evolve under those constraints.
So what does this say about the ability to scale up training on this human data in an attempt to reach superhuman intelligence?