I think peak intelligence (peak capability to reach a goal) will not be limited by the amount of compute, raw data, or algorithmic capability to process the data well, but by the finite amount of reality that’s relevant to achieving that goal. If one wants to take over the world, the way internet infrastructure works is relevant. The exact diameters of all the stones in the Rhine river are not, and neither is the number of red dwarfs in the universe. If we’re lucky, the amount of reality that turns out to be relevant for taking over the world is not too far beyond what humanity can already collectively process. I can see this as a way for the world to be saved by default (but I don’t think it’s super likely). I do think this makes an ever-expanding giant pile of compute an unlikely outcome (but some other kind of ever-expanding AI-led force a lot more likely).
I think this is probably true, and yet I also don’t think that humans are anywhere near this peak intelligence level yet. Also, simply being able to think faster, without being more knowledgeable or intelligent, would be a significant strategic advantage in competition or conflict. Even that would hit a peak, though, where additional speed (all else held constant) would confer no further advantage.
Similarly, knowledge, like the diameters of river stones, has its own peak. That’s going to be much more context-dependent, though: different knowledge is relevant to different problems. Some problems benefit from in-depth knowledge about them; others are knowledge-light.
So, intelligence (the capacity to utilize knowledge, reason abstractly, and concoct useful plans) and speed of thought are much more general capabilities than knowledge. In humans, these three attributes tend to be highly entangled due to upstream causes like education and genetics. In AI, we see them come apart: some very knowledgeable systems with excellent retrieval speed don’t seem very intelligent, while some intelligent systems are very slow or only very narrowly knowledgeable.
I think the main problem is that the two main weak points (computer systems and humans) have an increasing attack surface. That is, if we introduce protective measures in software, the protective measures themselves can become sources of vulnerability, unless we are really sure that’s not the case.
I’m now wondering whether this idea has already been worked out by someone (probably?). Any sources?