I think most people are fundamentally following ‘knee-jerk-morality’, with the various (meta)ethical systems serving as rationalizations. This is evidenced by the fact that answers to the trolley problem differ when factors that are morally neutral within the ethical system are changed: for example, whether something happens through action or inaction.
The paper shows that some of the rules of such a rationalization of knee-jerk-morality can be encoded in a Prolog program. But if the problem changes a bit (say, to the involuntary-organ-transplant case), you’ll need extra rules.
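To make this concrete, here is a minimal sketch of how such rules might look. This is my own illustration, not the paper's actual program; the predicates (permissible/1, saves/2, kills/2, harm_is_means/1) are made up for the example:

```prolog
% Hypothetical sketch: a consequentialist rule patched with a
% doctrine-of-double-effect exception. Not the paper's encoding.

% An action is permissible if it saves more people than it kills,
% and the harm is a side effect rather than the means.
permissible(Action) :-
    saves(Action, Saved),
    kills(Action, Killed),
    Saved > Killed,
    \+ harm_is_means(Action).

% Trolley case: diverting kills one as a side effect of saving five.
saves(divert, 5).
kills(divert, 1).

% Transplant case: same numbers, but here the killing is the means,
% so an extra fact is needed to block the counter-intuitive answer.
saves(harvest_organs, 5).
kills(harvest_organs, 1).
harm_is_means(harvest_organs).
```

Without the harm_is_means/1 patch, the program would happily endorse harvesting the organs; every new variant of the problem threatens to require another patch like this.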
A limited number of stable rules is easy to program. However, if you want to mimic a real ‘knee-jerk-moral’ human being, you probably need an AGI: the rules are unclear, unstable, and influenced by culture and emotions.
Interesting read!