Wait, Jürgen Schmidhuber seems to:
Believe in hard takeoff
Not believe in Singleton AI.
I sense a contradiction here. Or does he think the first superhuman optimization process would probably not take over the world as quickly as it can? Unless that's specifically encoded in its utility function, that hardly sounds like the rational choice.
It's at least partly a matter of how quickly that can actually happen. Consider that the world is a big place, and there are currently significant power differentials.
There might be all sorts of practical issues that an AGI lacking physical means could stumble on.
The whole scenario depends heavily on what robotics technologies exist, what sorts of networks are in place, etc.