Another big aspect to this is that narrow AIs occupy ecological space and potentially may allow us to move the baseline level of technology closer to theoretical limits.
The AI superintelligence scenario implicitly is one where the baseline level of technology is far from the limit. Where the ASI can invent nanotechnology, or hack insecure computers, or living humans are just walking around and not wearing isolation suits, or the ASI can set up manufacturing centers on the ocean floor which is unmonitored and unoccupied.
If we had better baseline technology, if we already had the above, the ASI might not have enough of an edge to win. It can’t break the laws of physics. If baseline human + narrow AI technology were even half as good as the absolute limits it might be enough. If computers were actually secure because narrow AI constructed a proof that no possible input message to all the software could cause out of spec/undefined behavior.
Another thing that gets argued is the ASI would coordinate with all the other AIs in this world to betray humans. Which is a threat but if the AIs are narrow enough this may not actually be possible. If they are too myopic and focused on short term tasks, there is nothing an ASI can offer as a long term promise, the myopic narrow AI will forget any deals struck because it runs in limited duration sessions like current models.
Another big aspect to this is that narrow AIs occupy ecological space and potentially may allow us to move the baseline level of technology closer to theoretical limits.
The AI superintelligence scenario implicitly is one where the baseline level of technology is far from the limit. Where the ASI can invent nanotechnology, or hack insecure computers, or living humans are just walking around and not wearing isolation suits, or the ASI can set up manufacturing centers on the ocean floor which is unmonitored and unoccupied.
If we had better baseline technology, if we already had the above, the ASI might not have enough of an edge to win. It can’t break the laws of physics. If baseline human + narrow AI technology were even half as good as the absolute limits it might be enough. If computers were actually secure because narrow AI constructed a proof that no possible input message to all the software could cause out of spec/undefined behavior.
Another thing that gets argued is the ASI would coordinate with all the other AIs in this world to betray humans. Which is a threat but if the AIs are narrow enough this may not actually be possible. If they are too myopic and focused on short term tasks, there is nothing an ASI can offer as a long term promise, the myopic narrow AI will forget any deals struck because it runs in limited duration sessions like current models.