Speculative:
Another point is that it may be speed of thought, action, and access to information that bottlenecks human productive activities; that these are the generators of the quality of human thought. The difference between you and Von Neumann isn’t necessarily that each of his thoughts was magically higher-quality than yours. It’s that his brain created (and probably pruned) thoughts at a much higher rate than yours, which left him with far more high-quality thoughts per unit time. As a result, he was also better able to figure out what information would be most useful to access in order to keep being even more productive.
Genius is just ordinary thinking performed at a faster rate and for a longer time.
GPT-4 is bottlenecked by its access to information and long-term memory. AutoGPT loosens or eliminates those bottlenecks. Once AutoGPT’s creators figure out how to prune its ineffective actions more effectively, and if costs come down, we’ll probably have a full-on AGI on our hands.
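To make the bottleneck-loosening concrete, here is a minimal sketch of the kind of plan-act-remember loop AutoGPT-style agents run. The `call_llm` and `execute_tool` functions and the naive keyword memory are hypothetical stand-ins for illustration, not AutoGPT’s actual interfaces.

```python
# Minimal sketch of an AutoGPT-style agent loop (illustrative only).
# call_llm and execute_tool are hypothetical stand-ins for a real LLM API
# and real tools; the memory store is a naive keyword index, not a vector DB.

from dataclasses import dataclass, field

@dataclass
class Memory:
    notes: list = field(default_factory=list)

    def store(self, text):
        self.notes.append(text)

    def recall(self, query, k=3):
        # Naive relevance: rank stored notes by word overlap with the query.
        words = set(query.lower().split())
        ranked = sorted(self.notes,
                        key=lambda n: len(words & set(n.lower().split())),
                        reverse=True)
        return ranked[:k]

def call_llm(prompt):
    """Placeholder for a real model call (e.g. GPT-4)."""
    return f"next action for: {prompt.splitlines()[0]}"

def execute_tool(action):
    """Placeholder for tools such as web search or code execution."""
    return f"observation from {action!r}"

def run_agent(goal, steps=3):
    memory = Memory()
    for _ in range(steps):
        context = "\n".join(memory.recall(goal))  # long-term memory fed back into the prompt
        plan = call_llm(f"Goal: {goal}\nNotes:\n{context}\nWhat next?")
        observation = execute_tool(plan)          # information access via tools
        memory.store(observation)                 # persist beyond the context window
    return memory.notes

print(run_agent("summarize recent work on scaling laws"))
```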
I think there’s definitely some truth to your argument, but I don’t think you’ve correctly described the main driver of genius. I actually think it’s closer to the opposite: my guess is that there’s a limit to thinking speed, and genius exists precisely because some people just have better thoughts. Even Von Neumann himself attributed much of his ability to intuition: he would go to sleep and wake up in the morning with the answer to whatever problem he had been toiling over.
I think, instead, that ideas for the most part emerge through deep and incomprehensible heuristics in our brains. Think of a chess master recognizing the next move at a glance. However much training it took to give him that ability, he is not doing a tree search in that moment. It’s not hard to imagine a hypothetical in which his brain, with no training at all, came preconfigured to make the same decisions, and I think that’s more or less what happens with chess prodigies: they don’t come preconfigured, but their brains are better primed to develop those intuitions.
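As a toy illustration of that distinction (my own sketch, not anything from the original discussion): compare choosing a move by one cheap learned evaluation per candidate with choosing it by explicit lookahead. The game and scoring function below are made up; only the shape of the two procedures matters.

```python
# Toy contrast between "intuition" and "search" for move selection.
# The game is trivial (move a counter toward a target number); the point is
# how the two procedures differ in cost, not the game itself.

TARGET = 7

def score(position):
    """Stand-in for a learned evaluation: one cheap pass, no lookahead."""
    return -abs(position - TARGET)

def intuition_move(position, moves):
    # "Glance" at each successor once and pick the best-looking one.
    return max(moves, key=lambda m: score(position + m))

def search_move(position, moves, depth=3):
    # Explicit lookahead: recursively expand future moves before choosing.
    def value(pos, d):
        if d == 0:
            return score(pos)
        return max(value(pos + m, d - 1) for m in moves)
    return max(moves, key=lambda m: value(position + m, depth - 1))

moves = [-2, -1, 1, 2]
print(intuition_move(0, moves))  # one evaluation per candidate move
print(search_move(0, moves))     # exponentially many evaluations with depth
```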
In other words, I think that genius is making better connections with the same number of “cycles”, and I think there’s evidence that LLMs do this too as they advance. For instance, part of the significance of DeepMind’s Chinchilla paper was that, for roughly the same training compute, a smaller network trained on more data outperformed a much larger one. The only explanation for this is that the quality of the processing had improved enough to counteract the effects of the lost quantity.
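To put rough numbers on that, using the standard C ≈ 6·N·D approximation for training FLOPs and the headline parameter and token counts from the paper (quoted from memory, so treat them as approximate):

```python
# Back-of-the-envelope check that Chinchilla (smaller, more data) and Gopher
# (larger, less data) sit at roughly the same training compute, using the
# standard approximation C ≈ 6 * N (parameters) * D (training tokens).
# Parameter and token counts are the headline figures, quoted from memory.

def train_flops(params, tokens):
    return 6 * params * tokens

gopher = train_flops(280e9, 300e9)       # ~5.0e23 FLOPs
chinchilla = train_flops(70e9, 1.4e12)   # ~5.9e23 FLOPs

print(f"Gopher:     {gopher:.2e} FLOPs")
print(f"Chinchilla: {chinchilla:.2e} FLOPs")
# Comparable compute, a 4x smaller network, better benchmark performance:
# the per-parameter quality of the learned processing went up.
```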