Which cluster is that: Agents currently acknowledging a morality similar to yours (with “capacity” referring to their choice of whether or not to act according to those nominal beliefs at any given time)? Agents who would be moved by your moral arguments (even if those arguments haven’t yet been presented to them)? Anything Turing-complete (even if not currently running an algorithm that has anything to do with morality)?
“Agents capable of being moral” corresponds very closely with my intuitive set “agents whose desires we should have some degree of respect for.” Thus, it captures my personal sense of what morality is quite well, though it doesn’t really capture why that’s my sense of it.