So why don’t humans shoplift? … People are born into this world with virtually no alignment, and gradually construct their own ethical system based on interactions with other people around them, most importantly their parents (or other social guardians).
I believe this ignores the most important part: humans are born with a potential for empathy (which is further shaped by their interactions with the people around them).
If the AI is born without this potential, there is nothing to shape.
Looking at the human example: a certain fraction of the population is born psychopathic, and despite receiving similar interactions, they grow up differently. This shows that the capacities you are born with matter at least as much as the upbringing.
(This entire line of thinking seems to me like wishful thinking: if we treat the AI as a human baby, it will magically gain the capabilities of a human baby—empathy, mirroring—and will grow up accordingly. No, it won’t. You don’t even need a superhuman AI to verify this; try the same experiment with a spider—which is more similar to humans than an AI is—and observe the results.)
The implication that I didn’t think to spell out is that the AI should be programmed with the capacity for empathy. It’s more a proposal of system design than a proposal of governance. Granted, the specifics of that design would be a discussion of their own.