As long as the computer is in its own simulated world, with no input from the outside world, we’re almost certainly safe. It cannot model the real world.
But hook it up to some cameras and microphones, and then you have the potential for something dangerous.
So I’d say there’s no reason to speculate about 1000x computing power. Just stick it in a virtual world with no human communication, let it run for a while, and see if it shows signs of the kind of intelligence that would be worrying (see the sketch below).
(The AI Box argument does not apply here: with no human communication channel, there is no gatekeeper to talk out of the box.)
The challenge, of course, is designing a virtual world complex enough that high intelligence is discernible, yet different enough from the real world that knowledge gained inside the simulation does not transfer to reality.
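To make the proposal concrete, here is a minimal toy sketch of the protocol. Everything in it is invented for illustration (the ring world, the placeholder policy, the competence threshold), and the hard part, a test that actually discerns dangerous intelligence, is exactly the challenge described above, so it appears only as a stub:

```python
# Hypothetical sketch of the boxed-evaluation protocol: an agent runs in a
# self-contained toy world with no external I/O, and we watch its behavior
# for signs of worrying competence. Every name here is invented.
import random


class VirtualWorld:
    """A tiny closed world: a ring of one-shot resource cells.
    Crucially, it has no channel to anything outside this process."""

    def __init__(self, size=512, seed=0):
        rng = random.Random(seed)
        self.cells = [rng.random() for _ in range(size)]
        self.pos = 0

    def step(self, action):
        # Actions: -1 move left, +1 move right, 0 harvest the current cell.
        if action:
            self.pos = (self.pos + action) % len(self.cells)
            return 0.0
        reward, self.cells[self.pos] = self.cells[self.pos], 0.0
        return reward


def looks_worrying(total_reward, initial_resources):
    """Stub for the real test: did the agent strip most of the world's
    resources, i.e. act far more competently than a random walker would?
    An adequate test for dangerous intelligence is the open problem."""
    return total_reward > 0.5 * initial_resources


world = VirtualWorld()
budget = sum(world.cells)
total = 0.0
for _ in range(2_000):
    action = random.choice([-1, 0, 1])  # placeholder policy; the AI goes here
    total += world.step(action)

print("worrying signs:", looks_worrying(total, budget))
```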
Note: given really large computational resources, an AI can always “break out by breaking in”: generate candidate sets of physical laws ordered by complexity, see what sort of intelligent life arises in each cosmology, craft an attack that works against that life on the assumption that it is running the AI in a box, and repeat for the hundred simplest cosmologies. This potentially requires enormous computing power, but it might take very little, depending on how strongly our physics determines our minds.
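Taken literally, that “break out by breaking in” loop is an enumeration over candidate physics. Here is a toy, heavily hedged illustration, with elementary cellular automata standing in for cosmologies; the complexity ordering, the structure score, and the band of “interesting” scores are all arbitrary choices of mine, and the attack-crafting step is of course not sketched:

```python
# Toy stand-in for "generate physical laws ordered by complexity and see what
# arises": elementary cellular automata play the role of candidate cosmologies,
# bit-count of the rule is a crude complexity proxy, and pattern entropy is a
# crude proxy for "non-trivial structure arose".
from collections import Counter
from math import log2


def run_eca(rule, width=101, steps=200):
    """Run elementary cellular automaton `rule` from a single live cell."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = [
            (rule >> (row[(i - 1) % width] * 4 + row[i] * 2
                      + row[(i + 1) % width])) & 1
            for i in range(width)
        ]
        history.append(row)
    return history


def structure_score(history):
    """Entropy of local 3-cell patterns in the final rows: a crude proxy
    for 'something non-trivial is going on in this cosmology'."""
    patterns = Counter(
        tuple(row[i:i + 3]) for row in history[-50:] for i in range(len(row) - 2)
    )
    total = sum(patterns.values())
    return -sum((n / total) * log2(n / total) for n in patterns.values())


# Enumerate the 256 candidate "physics" in order of the complexity proxy,
# keeping those whose dynamics look structured rather than dead or noisy.
candidates = sorted(range(256), key=lambda r: bin(r).count("1"))
for rule in candidates[:100]:  # "the hundred simplest cosmologies"
    score = structure_score(run_eca(rule))
    if 1.5 < score < 2.9:  # arbitrary band: not frozen, not pure noise
        print(f"rule {rule}: structure score {score:.2f} -- worth a closer look")
```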
I’d say that if it started running a huge number of simulations of physical realities and analyzing the intelligence of the beings that resulted, that would fall squarely into the ‘worrying level of intelligence’ category.
In fact, if it started attempting to alter the physics of the virtual world it inhabits at any level, whether by finding some in-game way to hack the virtual world or by running simulations of alternate physics, that would be incredibly worrying.
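For the first kind of tampering, an in-game hack of the world’s physics, one concrete (and entirely hypothetical) detection mechanism is an invariant monitor running outside the simulation: recompute a conserved quantity of the sanctioned dynamics every tick and treat unexplained drift as evidence of tampering. A minimal sketch, assuming a toy spring-mass world:

```python
# Hedged sketch of a hypervisor-side invariant monitor. Everything here (the
# toy spring-mass system, the tolerance) is invented for illustration.

def total_energy(pos, vel, k=1.0, m=1.0):
    # Energy of an ideal spring-mass system: kinetic + potential.
    return 0.5 * m * vel ** 2 + 0.5 * k * pos ** 2


def step(pos, vel, dt=1e-3, k=1.0, m=1.0):
    # Semi-implicit Euler integration (keeps energy bounded for this system).
    vel -= (k / m) * pos * dt
    pos += vel * dt
    return pos, vel


pos, vel = 1.0, 0.0
baseline = total_energy(pos, vel)
for tick in range(100_000):
    pos, vel = step(pos, vel)
    # State edits made outside the sanctioned dynamics would show up as an
    # unexplained drift in a quantity the dynamics are supposed to conserve.
    if abs(total_energy(pos, vel) - baseline) > 1e-2 * baseline:
        print(f"tick {tick}: invariant violated -- possible physics tampering")
        break
else:
    print("invariants held for the whole run")
```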