Any form of AI, not just AIXI approximations. Connect it up to a car, and it can be dangerous in, at minimum, all of the ways that a human driver can be dangerous. Connect it up to a plane, and it can be dangerous in, at minimum, all the ways that a human pilot can be dangerous. Connect it up to any sort of heavy equipment and it can be dangerous in, at minimum, all the ways that a human operator can be dangerous. (And not merely a trained human; an untrained, drunk, or actively malicious human can be dangerous in any of those roles).
I don’t think any of these forms of danger is sufficient reason to halt AI research, but they should be considered in any practical application.
This is the kind of danger XiXiDu talks about... simple failure to function... not the kind EY talks about, which is highly competent execution of unfriendly goals. The two are orthogonal.
The difference between the two is just a matter of processing power and training data.