In summary:
Creating an agent was apparently already a solved problem; all that was missing was a robust method of generating ideas/plans that are even vaguely possible.
Star Trek (and other sci-fi) continues to be surprisingly prescient: “Computer, create an adversary capable of outwitting Data” spawning an agentic AI is actually completely realistic for 24th-century technology.
Our only hopes are:
The accumulated knowledge of humanity is sufficient to create AIs with the equivalent of an IQ of 200, but not 2000.
Governments step in and ban things.
Adversarial action keeps things from going pear-shaped (winning against nature is much easier than winning against other agents; just ask any physicist who has tried to beat the stock market).
Chimps still have it pretty good, at least by their own standards, even though we took over the world.
Speculative:
Another point is that it may be the speed of thought, action, and access to information that bottlenecks human productive activities, and that these are the generators of the quality of human thought. The difference between you and Von Neumann isn’t necessarily that each of his thoughts was magically higher-quality than yours. It’s that his brain created (and probably pruned) thoughts at a much higher rate than yours, which left him with a lot more high-quality thoughts per unit time. As a result, he was also able to figure out what information would be most useful to access in order to continue being even more productive.
Genius is just ordinary thinking performed at a faster rate and for a longer time.
GPT-4 is bottlenecked by its access to information and long-term memory. AutoGPT loosens or eliminates those bottlenecks. If AutoGPT’s creators figure out how to more effectively prune its ineffective actions, and if costs come down, then we’ll probably have a full-on AGI on our hands.
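The generate-then-prune loop this describes can be sketched in a few lines. To be clear, `generate_candidates` and `score` below are hypothetical stand-ins for an LLM proposer and some critic or heuristic; they are not AutoGPT's actual interfaces.

```python
import random

def generate_candidates(goal, n=5):
    # Hypothetical stand-in for an LLM proposing candidate next actions.
    return [f"candidate action {i} toward: {goal}" for i in range(n)]

def score(action):
    # Hypothetical stand-in for a critic rating how promising an action is.
    return random.random()

def agent_step(goal, keep=2):
    # Generate many candidate actions, then prune to the most promising few.
    candidates = generate_candidates(goal)
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[:keep]

print(agent_step("summarize today's inbox"))
```

The point of the sketch is only where the leverage sits: generation is cheap, so the quality of the pruning step is what decides whether the loop produces plans that are even vaguely possible.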
I think there’s some truth to this, but I don’t think you’ve correctly identified the main driver of genius. I actually think it’s the opposite: my guess is that there’s a limit to thinking speed, and genius exists precisely because some people just have better thoughts. Even Von Neumann himself attributed much of his ability to intuition. He would go to sleep and in the morning he would have the answer to whatever problem he was toiling over.
I think, instead, that ideas for the most part emerge through some deep and incomprehensible heuristics in our brains. Think about a chess master recognizing the next move at just a glance. However much training it took to give him that ability, he is not doing a tree search at that moment. It’s not hard to imagine a hypothetical where his brain, with no training, came preconfigured to make the same decisions, and indeed I think that’s more or less what happens with chess prodigies. They don’t come preconfigured, but their brains are better primed to develop those intuitions.
In other words, I think that genius is making better connections with the same number of “cycles”, and I think there’s evidence that LLMs do this too as they advance. For instance, part of the significance of DeepMind’s Chinchilla paper was that by training a smaller network for longer, on more data, they were able to get better performance out of it. The most natural explanation is that the quality of the processing improved enough to counteract the effects of the lost quantity.
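For a sense of the trade-off the Chinchilla result quantifies, here is a back-of-the-envelope sketch using two rules of thumb commonly associated with the paper: training FLOPs C ≈ 6·N·D (N parameters, D tokens), and a roughly compute-optimal ratio of about 20 tokens per parameter. Both constants are approximations, not exact fitted values.

```python
def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Approximate compute-optimal (params, tokens) split for a FLOP budget.

    Assumes C ~= 6 * N * D and D ~= 20 * N; both are rough rules of
    thumb associated with the Chinchilla paper, not exact fitted values.
    """
    # C = 6*N*D with D = r*N gives C = 6*r*N^2, so N = sqrt(C / (6*r)).
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# A Gopher-scale budget (~5.76e23 FLOPs) lands near Chinchilla's
# roughly 70B-parameter / 1.4T-token configuration.
n, d = chinchilla_optimal(5.76e23)
```

Under these approximations, the same compute spent on a network a quarter the size of Gopher, trained on about four times the tokens, is what produced the better model.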
I think we ban research on embodied AI and let GPT take over the Metaverse. Win-win.
Edit: and also nuke underground robot research facilities? Doesn’t seem like a great idea in hindsight.