Are we really sure that we should model AIs in the image of humans? We apparently cannot align people with people, so why would a human replica be any different? If we train an AI to behave like a human, why do we expect the AI NOT to behave like a human? Like it or not, part of what makes us human is lying, stealing, and violence.
“Fifty-two people lost their lives to homicide globally every hour in 2021, says new report from UN Office on Drugs and Crime”. https://unis.unvienna.org/unis/en/pressrels/2023/uniscp1165.html
I’m not sure what this comment is replying to. I don’t think it’s likely that AI will be very human-like, nor do I have any special reason to advocate for human-like AI designs. I do note that some aspects of training wise AI may be easier if AI were more like humans, but that’s contingent on what I consider the unlikely possibility of human-like AI.