Are you sure that superintelligent AIs would have a “correct ontology/semantics”?
It’s hard to imagine a superintelligent AGI that didn’t know basic facts about the world like “trees have roots underground” or “most human beings sleep at night”.
They would have to have a useful one in order to achieve their goals.
Useful models of reality (useful in the sense of achieving goals) tend to be accurate ones. This is especially true of a single agent that isn’t subject to the weird foibles of human psychology and isn’t mainly achieving things via signalling, as many humans do.
The reason I made the point about having a correct understanding of the world (for example, knowing what the term “Nazi” actually means) is that Tay has not achieved the status of being “unfriendly”: it doesn’t actually have anything that could reasonably be called goals pertaining to the world. Tay is not even an unfriendly infra-intelligence, though I’d be very interested if someone managed to make one.