Eliezer: That’s a good point as far as it goes. But the answer many contemporary deontologists would give is that you can’t expect to be able to computationally cash out all decision problems, particularly moral ones. (Who said morality was easy?) In hard cases, it seems to me that the most plausible principles of morality don’t provide cookie-cutter determinate answers. What fills the void? Several things kick in, depending on the theory and the version of it. For example, some duties are understood as optional rather than required, which gives the agent room to make either decision (as long as s/he’s acting from moral motivations). Similarly, some decisions can be made by the agent’s reflection on their own character: what sort of person am I? What are my values? Considerations like these can often fill the gaps in practical reason left by non-computational moral theories.