There’s no guaranteed way to raise kids who grow up to still love you. But attempted indoctrination, followed by deconstruction and then shunning, is a near-guaranteed way to ensure that they grow up to hate you.
For humans, perhaps. What is the evidence that something similar would apply to a random AI?
This fallacy underlies a form of anthropomorphism in which people expect that, as a universal rule, particular stimuli applied to any mind-in-general will produce some particular response—for example, that if you punch an AI in the nose, it will get angry. Humans are programmed with that particular conditional response, but not all possible minds would be. (source)
Related: Detached Lever Fallacy

A general refusal to recognize human properties in human imitations that have successfully attained them is also a potential issue; the possibility of error cuts both ways. LLM simulacra are not random AIs.