You seem to be saying “the true Turing test is whether the AI kills us after we give it the chance, because this distinguishes it from a human”. Which essentially means you’re saying “aligned AI = aligned AI”.

No, because a human might also kill you when you give them the chance. To pass the strong-form Turing Test, the AI would have to make the same decision a human would (probabilistically: have the same probability of making it).

Then of what use is the test? Of what use is this concept?

It is useful because human history tells us what kinds of outcomes follow when millions of humans interact, so knowing “whether an AI will emulate human behavior under all circumstances” tells us what outcomes to expect from the AI.
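As an aside on what “the same probability of doing it” could mean operationally: one minimal sketch (not anything proposed in the thread) is a two-proportion test over repeated trials of a fixed scenario with a binary decision. If the test cannot distinguish the AI’s decision rate from the human baseline, the AI passes the strong-form criterion for that scenario. All trial counts below are made up for illustration.

```python
import math

def two_proportion_z(k1: int, n1: int, k2: int, n2: int) -> float:
    """z-statistic for H0: both groups take the action with the same probability."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                        # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error of p1 - p2
    return (p1 - p2) / se

# Hypothetical counts: out of n trials offering "the chance",
# how many times did each agent take the harmful action?
human_k, human_n = 3, 1000   # made-up human baseline
ai_k, ai_n       = 4, 1000   # made-up AI behavior

z = two_proportion_z(human_k, human_n, ai_k, ai_n)
# |z| < 1.96 means the two rates are indistinguishable at the 5% level,
# i.e. on this evidence the AI passes the strong-form criterion here.
print(f"z = {z:.3f}, indistinguishable at 5%: {abs(z) < 1.96}")
```

The sketch only covers one scenario; the quoted claim (“under all circumstances”) would require this kind of distributional match across every scenario, which is what makes the strong-form test so demanding.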