Supposing you did want to regulate AI research, how could you tell whether a program was getting close enough to AI to be dangerous?
One idea for a first pass could be: suppose you had a computer with 1000 times the computing power of the best supercomputer. Would running your algorithm on that machine be dangerous on its own?
For example, I think even with 1000x computing power the deep-learning type of approach would be fine: it would just give you really good image/voice/action recognizers. On the other hand, consider DeepMind’s general game-playing program, which plays a variety of simple video games near-optimally, including exploiting bugs. A system like that at 1000x power, given decent models of parts of the world and robotics, might be hard to contain. So, in summary, I would say a panel of experts rating the danger of the program running with 1000x computing power would be an OK first pass.
I know DeepMind’s architecture (it’s basically reinforcement learning + deep learning) and can guarantee you that 1000x computing power would have a hard time getting you to NES games, let alone anything dangerous.
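For readers who haven’t seen that recipe, here is a minimal sketch of its shape: tabular Q-learning on a made-up toy chain world. DQN-style agents replace the Q-table with a deep network over raw pixels, but the training loop looks the same. The toy environment and every name in it are my own illustration, not DeepMind’s code.

```python
import random

N_STATES, N_ACTIONS = 10, 2      # toy chain world; actions: 0 = left, 1 = right
GAMMA, ALPHA, EPSILON = 0.99, 0.1, 0.1

# Action-value table; DQN swaps this for a deep network over pixels.
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy dynamics: reward 1.0 for reaching the rightmost state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for episode in range(500):
    state = 0
    for t in range(200):                      # cap episode length
        if random.random() < EPSILON:         # epsilon-greedy exploration
            action = random.randrange(N_ACTIONS)
        else:                                 # greedy, with random tie-break
            best = max(q[state])
            action = random.choice(
                [a for a in range(N_ACTIONS) if q[state][a] == best])
        nxt, reward, done = step(state, action)
        # one-step temporal-difference (Q-learning) update
        target = reward + (0.0 if done else GAMMA * max(q[nxt]))
        q[state][action] += ALPHA * (target - q[state][action])
        state = nxt
        if done:
            break

print(q[0])  # action values at the start state; "right" should win
```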
As long as the computer is in its own simulated world, with no input from the outside world, we’re almost certainly safe. It cannot model the real world.
But hook it up to some cameras and microphones, and then you have the potential for something that could wind up being dangerous.
So I’d say there’s no reason to speculate about 1000x computing power. Just stick it in a virtual world with no human communication, let it run for a while, and see if it shows signs of the kind of intelligence that would be worrying.
(The AI Box argument does not apply here.)
The challenge, of course, is coming up with a virtual world that is complex enough for us to discern high intelligence, yet different enough from the real world that the AI could not apply knowledge gained in the simulation to reality.
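To make the proposal concrete, here is one hypothetical shape such a test harness could take. `VirtualWorld`, `Agent`, and `shows_worrying_intelligence` are placeholder names I’m inventing for illustration, not a real API; the essential property is that the agent’s only I/O channel is the simulated world itself.

```python
import random

class VirtualWorld:
    """Stand-in for a sealed simulated world with no outside channel."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = 0.0

    def observe(self):
        return self.state

    def act(self, action):
        # trivially simple stand-in "physics"
        self.state += action + self.rng.uniform(-0.1, 0.1)

class Agent:
    """Stand-in for the system under test."""
    def policy(self, observation):
        return 1.0 if observation < 10.0 else -1.0

def shows_worrying_intelligence(trace):
    """Placeholder judgment: in practice this is the hard part, i.e.
    looking for planning, world-modelling, or probing of the simulation."""
    return False

world, agent, trace = VirtualWorld(), Agent(), []
for t in range(10_000):
    obs = world.observe()
    action = agent.policy(obs)
    world.act(action)
    trace.append((obs, action))

print("worrying:", shows_worrying_intelligence(trace))
```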
Note: given really large computational resources, an AI can always “break out by breaking in”, even from a sealed simulation: generate candidate sets of physical laws ordered by complexity, look at what sort of intelligent life arises in each of those cosmologies, craft an attack that works against that life on the assumption that it is running the AI in a box, and repeat for the hundred simplest cosmologies. This potentially needs a lot of computing power, but it might take very little, depending on how strongly our minds are determined by our physics.
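The first step of that strategy, “generate candidate physical laws ordered by complexity”, at least has a well-defined skeleton: enumerate descriptions shortest-first, Solomonoff-style, and spend a simulation budget on each. A toy sketch, with the simulation step left as a stand-in since that step is the whole problem:

```python
from itertools import count, product

def programs_by_complexity():
    """Yield all bitstrings shortest-first: a crude complexity ordering."""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def simulate_cosmology(program, steps):
    """Stand-in: a real version would interpret `program` on a universal
    machine and check whether intelligent life arises in the result."""
    return None  # no actual simulation here

# Examine the hundred simplest candidate cosmologies, as the note suggests.
for program, _ in zip(programs_by_complexity(), range(100)):
    simulate_cosmology(program, steps=1_000)
```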
I’d say that if it started running a huge number of simulations of physical realities and analyzing the intelligence of the beings that resulted, that would fall squarely into the ‘worrying level of intelligence’ category.
In fact, if it started attempting to alter the physics of the virtual world it’s in at any level, whether by finding some in-game way to hack the virtual world or by running simulations of alternate physics, that would be incredibly worrying.
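One crude way to operationalize “watch for attempts to alter the physics” is to check conserved quantities of the simulation every tick and halt on any drift. The energy function below is a hypothetical example I’m supplying for illustration; a real simulator would have its own invariants.

```python
def total_energy(state):
    """Hypothetical conserved quantity: kinetic energy of point masses."""
    return sum(0.5 * mass * velocity ** 2 for mass, velocity in state)

def check_invariants(prev_state, state, tolerance=1e-9):
    """Halt the sandbox if a quantity the physics should conserve drifts."""
    drift = abs(total_energy(state) - total_energy(prev_state))
    if drift > tolerance:
        raise RuntimeError("physics invariant violated: halt and inspect")

# Example tick: identical states, so no drift and no alarm.
state = [(1.0, 2.0), (2.0, -1.0)]   # (mass, velocity) pairs
check_invariants(state, state)
```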