Relatively new here (hi) and without adequate ability to warp spacetime so that I may peruse all that EY has written on this topic, but am still wondering—Why pursue the idea that morality is hardwired, or that there is an absolute code of what is right or wrong?
Thou shalt not kill—well, except if someone is trying to kill you.
To be brief—it seems to me that 1) Morality exists in a social context. 2) Morality is fluid, and can change/has changed over time. 3) If there is a primary moral imperative that underlies everything we know about morality, it seems that that imperative is SURVIVAL, of self first, kin second, group/species third.
Empathy exists because it is a useful survival skill. Altruism is a little harder to explain.
But what justifies the assumption that there IS an absolute (or even approximate) code of morality that can be hardwired and impervious to change?
The other thing I wonder about when reading EY on morality is—would you trust your AI to LEARN morality and moral codes in the same way a human does? (See Kohlberg’s Levels of Moral Reasoning.) Or would you presume that SOMETHING must be hardwired? If so, why?
(EY—Do you summarize your views on these points somewhere? Pointers to said location very much appreciated.)