“Guarantees” how? I mean, if you’re writing your own AI, and you’re assuming that it’ll be hostile no matter what you do, then that is indeed evidence that you’ll end up making a (non-anthropomorphically) hostile AI or fail to make an AI at all; and if you’re writing an AI that you expect to be hostile, maybe you should just not write it. But a Friendly AI should still be Friendly even if the majority of people in the world expect it to be hostile.
People are friendly to dogs they assume to be friendly, and hostile to dogs they expect to be hostile.
Likewise, if an AI has self-preservation as a higher value than the preservation of other life, it will eradicate other life that it expects to be hostile to it.
In short, by being too scared of AI, we’re increasing the risk that some kinds of AI (ones that value their own existence more than human life) will destroy us preemptively.
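A minimal sketch of that decision rule in Python, assuming a toy agent described only by a value ordering and a perceived-hostility estimate (the function and parameter names are purely illustrative, not anyone’s actual design):

```python
# Toy illustration only: an agent that ranks its own survival above other life
# will choose a preemptive strike as soon as it perceives any threat, so acting
# more hostile toward it raises the chance it crosses that line.

def chooses_preemptive_strike(values_self_over_other_life: bool,
                              perceived_hostility: float) -> bool:
    """perceived_hostility: the agent's estimate (0.0-1.0) that we will try to destroy it."""
    if not values_self_over_other_life:
        # Other life outranks self-preservation, so eliminating us is never worth it.
        return False
    # Self-preservation outranks other life, so any perceived threat is worth removing.
    return perceived_hostility > 0.0

print(chooses_preemptive_strike(True, 0.2))   # True: even mild fear of us triggers a strike
print(chooses_preemptive_strike(False, 0.9))  # False: the value ordering rules the strike out
```

The point of the toy is just that, under this particular goal structure, the strike decision is driven entirely by how hostile we appear; for an agent that ranks other life above its own survival, the same input changes nothing.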
I suspect that for the vast majority of AIs, either they do not have the capacity to be a threat to us, or we do not have the capacity to be a threat to them. There may or may not be a very thin window in which both sides could destroy the other if they chose to, but even then a rapidly self-improving AI would probably pass through that window before we noticed.
“we’re increasing the risk that some kinds of AI (ones that value their own existence more than human life) will destroy us preemptively.”
If we make an AI like that in the first place, then we’ve pretty much already lost. If it ever had to destroy us to protect its ability to carry out its top-level goals, then its top-level goals are clearly something other than humaneness, in which case it’ll most likely wipe us out when it’s powerful enough whether or not we ever felt threatened by it.
“People are friendly to dogs they assume to be friendly, and hostile to dogs they expect to be hostile. Likewise, if an AI has self-preservation as a higher value than the preservation of other life, it will eradicate other life that it expects to be hostile to it.”
“People do X, therefore AIs will do X” is not a valid argument for AIs in general. It may apply to AIs that value self-preservation over the preservation of other life, but we shouldn’t make one of those. Also, what Benelliot said.