I am sympathetic to this argument, though I give less credence than you to moral realism (I still assign it the most credence of any meta-ethical theory and think it’s what we should act on). My main worry is that an AI system won’t have access to the moral facts, because it won’t be able to experience pleasure and suffering at all. And like you, I’m not fully confident in moral realism or the realist’s wager, which means that even if an AI system were sentient, there’s still a risk that it would be amoral.
I address this worry in the section titled “But the AI won’t be conscious”.