It’s incomputable because the Solomonoff prior is, but you can approximate it—to arbitrary precision if you’ve got the processing power, though that’s a big “if”—with statistical methods. Searching Github for the Monte Carlo approximations of AIXI that eli_sennesh mentioned turned up at least a dozen or so before I got bored.
Most of them seem to operate on tightly bounded problems, intelligently enough. I haven’t tried running one with fewer constraints (maybe eli has?), but I’d expect it to scribble over anything it could get its little paws on.
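For concreteness, here is a drastically cut-down sketch (my own illustration, not any of those repositories) of what "approximate the Solomonoff mixture and plan by sampling" can look like: a finite, hand-picked hypothesis class with a 2^-length-style prior stands in for "all computable environments", Bayesian updating stands in for Solomonoff induction, and a Monte Carlo estimate of one-step expected reward stands in for the full expectimax. The coin-flip environment, the candidate biases, and the function names are all assumptions made up for illustration.

```python
import random

# Toy Monte Carlo approximation of the AIXI idea, restricted to a tiny,
# hand-picked hypothesis class instead of "all computable environments".
# Hypothetical setup: the environment is a biased coin whose bias is one of
# a few candidate values; reward is 1 for guessing the next flip correctly.

CANDIDATE_BIASES = [0.1, 0.5, 0.9]  # stand-in for the space of programs
# Crude 2^-length-style simplicity prior (unnormalised is fine for Bayes).
PRIOR = {b: 2.0 ** -(i + 1) for i, b in enumerate(CANDIDATE_BIASES)}

def posterior(history):
    """Bayesian posterior over candidate biases given the observed flips."""
    weights = {}
    for bias, prior in PRIOR.items():
        likelihood = 1.0
        for flip in history:
            likelihood *= bias if flip == 1 else (1.0 - bias)
        weights[bias] = prior * likelihood
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}

def expected_reward(action, history, rollouts=1000):
    """Monte Carlo estimate of the reward for guessing `action` next."""
    post = posterior(history)
    hits = 0
    for _ in range(rollouts):
        # Sample an environment from the posterior, then sample its next flip.
        bias = random.choices(list(post), weights=list(post.values()))[0]
        flip = 1 if random.random() < bias else 0
        hits += int(flip == action)
    return hits / rollouts

def act(history):
    """Pick the guess with the highest estimated expected reward."""
    return max((0, 1), key=lambda a: expected_reward(a, history))

if __name__ == "__main__":
    true_bias = 0.9
    history, total = [], 0
    for _ in range(50):
        guess = act(history)
        flip = 1 if random.random() < true_bias else 0
        total += int(guess == flip)
        history.append(flip)
    print(f"reward over 50 steps: {total}")
```

The real implementations replace the hand-picked hypothesis class with something like context tree weighting and the one-step estimate with Monte Carlo tree search, but the shape of the approximation is the same: a simplicity-weighted mixture over models, updated on observations, with sampling in place of the exact (incomputable) expectation.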
But people do run these things that aren’t actually AIXIs, and they haven’t actually taken over the world, so they aren’t actually dangerous.
So there is no actually dangerous actual AI.
...it’s not dangerous until it actually tries to take over the world?
I can think of plenty of ways in which an AI can be dangerous without taking that step.
Then you had better tell people not to download and run AIXI approximations.
Any form of AI, not just AIXI approximations. Connect it up to a car, and it can be dangerous in, at minimum, all of the ways that a human driver can be dangerous. Connect it up to a plane, and it can be dangerous in, at minimum, all the ways that a human pilot can be dangerous. Connect it up to any sort of heavy equipment and it can be dangerous in, at minimum, all the ways that a human operator can be dangerous. (And not merely a trained human; an untrained, drunk, or actively malicious human can be dangerous in any of those roles).
I don’t think any of these forms of danger is reason enough to stop AI research, but they should be considered in any practical application.
This is the kind of danger XiXiDu talks about: simple failure to function. It is not the kind EY talks about, which is highly competent execution of unfriendly goals. The two are orthogonal.
The difference between one and the other is just a matter of processing power and training data.