This is not a criticism of your presentation, but rather of the presuppositions of the debate itself. As someone who thinks that moral sentiments lie at the root of ethics, I have a hard time picturing an intelligent being doing moral reasoning without feeling such sentiments. I suspect that researchers do not want to go out of their way to give AIs affective mental states, much less anything like the full range of human moral emotions, such as anger, indignation, empathy, outrage, shame and disgust. The idea seems to be that if the AI is programmed with certain preference values for ranges of outcomes, that’s all the ethics it needs.
If that’s the way it goes then I’d prefer that the AI not be able to deliberate about values at all, though that might be hard to avoid if it’s superintelligent. What makes humans somewhat ethically predictable and mostly not monstrous is that our ethical decisions are grounded in a human moral psychology, which has its own reward system. Without the grounding, I worry that an AI left to its own devices could go off the rails in ways that humans find hard to imagine. Yes, many of our human moral emotions actually make it more difficult to do the right thing. If I were re-designing people, or designing AIs, I’d redo the weights of human moral emotions to strengthen sympathy, philanthropy and an urge for fairness. I’d basically be aiming to make an artificial, superintelligent Hume. An AI that I can trust with moral reasoning would have to have a good character—which cannot happen without the right mixture of moral emotions.