Okay, I’m gonna take my skeptical shot at the argument, I hope you don’t mind!
> an AI that is *better than people at achieving arbitrary goals in the real world* would be a very scary thing, because whatever the AI tried to do would then actually happen
It’s not true that whatever the AI tried to do would happen. What if an AI wanted to travel faster than the speed of light, or prove that 2+2=5, or destroy the sun within 1 second of being turned on?
You can’t just say “arbitrary goals”; you have to actually explain which goals would be realistically achievable by a realistic AI that could actually be built in the near future. If those abilities fall short of “destroy all of humanity”, then there is no x-risk.
> As stories of magically granted wishes and sci-fi dystopias point out, it’s really hard to specify a goal that can’t backfire
This is fictional evidence. Genies don’t exist, and if they did, it probably wouldn’t be that hard to add enough caveats to your wish to prevent global genocide. A counterexample might be the use of laws: sure, there are loopholes, but none big enough that the law would let you off for a killing spree in broad daylight.
> Current AI systems certainly fall far short of being able to achieve arbitrary goals in the real world better than people, but there’s nothing in physics or mathematics that says such an AI is *impossible*
Well, there are laws of physics and maths that put limits on available computational power, which in turn put limits on what an AI can actually achieve. For example, a perfect Bayesian reasoner is forbidden by the laws of mathematics.
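To gesture at why (reading “perfect Bayesian reasoner” as the usual idealization, Solomonoff induction, which is my gloss rather than anything from the post): the universal prior weights every program that could have produced your observations by its length,

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}$$

where $U$ is a fixed universal prefix machine and $|p|$ is the length of program $p$. Computing $M(x)$ exactly would require deciding which programs halt with the right output, i.e. solving the halting problem, so any physically buildable reasoner can only ever approximate it.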
Also, the wish-backfiring scenario is an argument from selective stupidity: an ASI wouldn’t have to interpret instructions literally as a result of some cognitive limitation.