Any form of AI, not just AIXI approximations. Connect it up to a car, and it can be dangerous in, at minimum, all of the ways that a human driver can be dangerous. Connect it up to a plane, and it can be dangerous in, at minimum, all the ways that a human pilot can be dangerous. Connect it up to any sort of heavy equipment and it can be dangerous in, at minimum, all the ways that a human operator can be dangerous. (And not merely a trained human; an untrained, drunk, or actively malicious human can be dangerous in any of those roles).
I don’t think that any of these forms of danger is sufficient to actively stop AI research, but they should be considered for any practical applications.
This is the kind of danger XiXiDu talks about (simple failure to function), not the kind EY talks about, which is highly competent execution of unfriendly goals. The two are orthogonal.
But people do run these things that aren’t actually AIXIs, and they haven’t actually taken over the world, so they aren’t actually dangerous.
So there is no actually dangerous actual AI.
...it’s not dangerous until it actually tries to take over the world?
I can think of plenty of ways in which an AI can be dangerous without taking that step.
Then you had better tell people not to download and run AIXI approximations.
The difference between mere failure to function and competent execution of unfriendly goals is just a matter of processing power and training data.