is if it turns out that advanced narrow-AIs manage to generate more utility than humans know what to do with initially.
I find it not just likely but borderline certain. Ubiquitous, explicitly below-human narrow AI has tremendous potential that we are blind to because we focus on superhuman AI. Creating superhuman, self-improving AGI, while extremely dangerous, is also an extremely hard problem (in the same realm as dry nanotech or FTL travel). Meanwhile, creating brick-dumb but ubiquitous narrow AI and then mass-producing it to saturation is easy. It could be done today; it's just a matter of market forces and logistics.
It might very well be the case that once the number of narrow-AI systems, devices and drones passes a certain threshold (say, they become as ubiquitous, cheap and accessible as cars, though not yet as much as smartphones), we would enter a weaker form of post-scarcity and have no need to create AI gods.
Another big aspect of this is that narrow AIs occupy ecological space and may allow us to move the baseline level of technology closer to its theoretical limits.
The AI superintelligence scenario is implicitly one where the baseline level of technology is far from that limit: the ASI can invent nanotechnology, or hack insecure computers, or exploit living humans who walk around without isolation suits, or set up manufacturing centers on an ocean floor that is unmonitored and unoccupied.
If we had better baseline technology, if we already had the above, the ASI might not have enough of an edge to win. It can't break the laws of physics. If baseline human-plus-narrow-AI technology were even half as good as the absolute limits, that might be enough, for example if computers were actually secure because a narrow AI had constructed a proof that no possible input message to any of the software could cause out-of-spec or undefined behavior.
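To make that last point concrete, here is a minimal sketch of what a machine-checked "no input can cause out-of-spec behavior" proof looks like at toy scale, using the z3-solver Python package; the function, the table size, and all the names are illustrative assumptions, not anything claimed above:

```python
# Toy illustration: prove that for EVERY possible 8-bit input, a lookup-index
# function stays inside a 16-entry table. An `unsat` answer to the "bad state
# is reachable" query is a proof over all 256 inputs, not a test of a few.
from z3 import BitVec, BitVecVal, Solver, UGE, unsat

def table_index(x):
    # Function under scrutiny: fold an 8-bit value into the range [0, 16).
    return x & BitVecVal(0x0F, 8)

x = BitVec("x", 8)          # symbolic stand-in for every possible input byte
idx = table_index(x)

s = Solver()
s.add(UGE(idx, BitVecVal(16, 8)))   # assert the BAD property: index >= 16 (unsigned)
if s.check() == unsat:
    print("Proved: no input byte can index out of bounds.")
else:
    print("Counterexample input:", s.model()[x])
```

Real software is vastly larger than this, of course, but the shape of the claim is the same: a solver exhausts the whole input space instead of sampling it.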
Another thing that gets argued is that the ASI would coordinate with all the other AIs in this world to betray humans. That is a threat, but if the AIs are narrow enough it may not actually be possible: if they are too myopic and focused on short-term tasks, there is nothing an ASI can offer as a long-term promise, since a myopic narrow AI will forget any deal struck because it runs in limited-duration sessions, like current models.