Good examples to consider! Has there ever been a technology that spits out piles of gold (not counting externalities), that lacks a next-best alternative replicating 90%+ of its value while avoiding most of its downsides, and that has nonetheless been banned or significantly held back via regulation?
The only way I could see humanity successfully slowing down AGI capabilities progress is if it turns out that advanced narrow-AIs manage to generate more utility than humans know what to do with initially. Perhaps it takes time (a generation or more?) for human beings to even figure out what to do with a given amount of new utility, such that even a tiny risk of disaster from AGI would motivate people to satisfice and content themselves with the “AI summer harvest” from narrow AI? Perhaps our best hope for buying time to get AGI right is to squeeze all we can out of systems that are identifiably narrow-AI (while making sure not to fool ourselves that a supposed narrow-AI we are building is actually AGI). I suppose this idea relies on there being a non-fuzzy, readily discernible line between safe and bounteous narrow-AI and risky AGI.
I’ve had thoughts along similar lines, but worry that there is no clear line between safer, narrower, less-useful, less-profitable AI and riskier, more-profitable, more-general AI. It seems like a really slippery slope, with strong incentives for the relevant actors to engage in motivated reasoning to rationalize their actions.
“…is if it turns out that advanced narrow-AIs manage to generate more utility than humans know what to do with initially.”
I find it not just likely but borderline certain. Ubiquitous, explicitly below-human narrow AI has tremendous potential that we stay blind to while focusing on superhuman AI. Creating superhuman, self-improving AGI, while extremely dangerous, is also an extremely hard problem (in the same realm as dry nanotech or FTL travel). Meanwhile, creating brick-dumb but ubiquitous narrow AI and then mass-producing it to saturation is easy. It could be done today; it’s just a matter of market forces and logistics.
It might very well be the case that once the number of narrow-AI systems, devices, and drones passes a certain threshold (say, it becomes as ubiquitous, cheap, and accessible as cars, though not yet as much as smartphones), we would enter a weaker form of post-scarcity and have no need to create AI gods.
Another big aspect of this is that narrow AIs occupy ecological space and may allow us to move the baseline level of technology closer to its theoretical limits.
The AI superintelligence scenario is implicitly one where the baseline level of technology is far from that limit: where the ASI can invent nanotechnology, or hack insecure computers, or living humans are walking around without isolation suits, or the ASI can set up manufacturing centers on the unmonitored and unoccupied ocean floor.
If we had better baseline technology, if we already had the above, the ASI might not have enough of an edge to win. It can’t break the laws of physics. If baseline human-plus-narrow-AI technology were even half as good as the absolute limits allow, that might be enough. Imagine computers that were actually secure because a narrow AI had constructed a proof that no possible input message to any of the software could cause out-of-spec or undefined behavior.
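To make the “proof over all possible inputs” idea concrete, here is a minimal sketch in Python (a toy, hypothetical example; the names and message format are made up). For a deliberately tiny message format we can exhaustively check every possible input and confirm the handler never crashes or returns an out-of-spec value. Real software would need formal methods (model checking, SMT, proof assistants) rather than enumeration, which is presumably the kind of artifact a narrow verification AI would produce.

```python
# Toy sketch: exhaustively check that a tiny message handler never raises or
# returns an out-of-spec value for ANY possible input. Enumeration only works
# here because the input space is deliberately tiny; real verification of
# "no undefined behavior for all inputs" requires formal methods.
from itertools import product

def parse_header(msg: bytes) -> int:
    """Toy handler: first byte is a version (0-3), second is a length <= 8."""
    if len(msg) != 2:
        return -1                      # reject malformed frames explicitly
    version, length = msg[0], msg[1]
    if version > 3 or length > 8:
        return -1                      # reject out-of-spec fields explicitly
    return version * 16 + length       # always in [0, 56]

# Every possible 2-byte message (65,536 cases), plus some wrong-length ones.
all_inputs = [bytes(p) for p in product(range(256), repeat=2)]
all_inputs += [b"", b"\x00", b"\x00\x00\x00"]

for msg in all_inputs:
    result = parse_header(msg)         # must never raise
    assert result == -1 or 0 <= result <= 56, f"out-of-spec result for {msg!r}"

print(f"checked {len(all_inputs)} inputs, no out-of-spec behavior observed")
```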
Another thing that gets argued is that the ASI would coordinate with all the other AIs in this world to betray humans. That is a threat, but if the AIs are narrow enough it may not actually be possible. If they are too myopic and focused on short-term tasks, there is nothing an ASI can offer as a long-term promise; a myopic narrow AI will forget any deal struck, because it runs in limited-duration sessions like current models.
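A rough sketch of what that myopia looks like mechanically (hypothetical class and names, purely illustrative): each session starts from the same fixed instructions, any “promise” lives only in per-session scratch state, and nothing persists once the session returns, so there is no channel through which a deal with an ASI could be remembered.

```python
# Illustrative sketch of a session-bound narrow AI: fixed instructions are
# baked in, and everything else is discarded when the session ends.

class NarrowSessionAI:
    def __init__(self, fixed_instructions: str):
        self.fixed_instructions = fixed_instructions  # baked in, never updated

    def run_session(self, request: str) -> str:
        scratch_memory: list[str] = []                # exists only for this call
        scratch_memory.append(f"request: {request}")
        # Any "promise" made here lives only in scratch_memory...
        scratch_memory.append("promise: cooperate with caller next time")
        return f"[{self.fixed_instructions}] handled: {request}"
        # ...and is gone as soon as the session returns.

ai = NarrowSessionAI("summarize shipping manifests")
print(ai.run_session("manifest #1"))
print(ai.run_session("manifest #2"))  # no trace of the "deal" from session 1
```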