Tay doesn’t tell us much about deliberate Un-Friendliness. But Tay does tell us that a well-intentioned effort to make an innocent, harmless AI can go wrong for unexpected reasons. Even for reasons that, in hindsight, are obvious.
Are you sure that superintelligent AIs would have a “correct ontology/semantics”? They would have to have a useful one, in order to achieve their goals, but both philosophers and scientists have had incorrect conceptualizations that nevertheless matched the real world closely enough to be productive. And for an un-Friendly AI, “productive” translates to “using your atoms for its own purposes.”
Are you sure that superintelligent AIs would have a “correct ontology/semantics”?
It’s hard to imagine a superintelligent AGI that didn’t know basic facts about the world, like “trees have roots underground” or “most human beings sleep at night”.
They would have to have a useful one, in order to achieve their goals
Useful models of reality (useful in the sense of achieving goals) tend to be accurate ones. This is especially true of a single agent that isn’t subject to the weird foibles of human psychology and isn’t mainly achieving things via signalling, as many humans do.
The reason I made the point about having a correct understanding of the world (for example, knowing what the term “Nazi” actually means) is that Tay has not achieved the status of being “unfriendly”: it doesn’t actually have anything that could reasonably be called goals pertaining to the world. Tay is not even an unfriendly infra-intelligence, though I’d be very interested if someone managed to make one.