It is fair to observe that when somebody claims that their utility function says one thing but their deontology prevents them from following up, that is at least grounds to suspect that one or the other is not fully motivating, not fully thought out, etc.
I agree, but deontology is well-known to be a problematic but widely-held philosophy, which should explain away the observed inconsistency (e.g. desires could be consistent but deontology prevents the desires from being acted upon). I think that the proposed alternate test of asking about slowing down longevity research should reveal whether there is a further inconsistency within the desires themselves.
The question is why the deontological concerns are motivating. If they are motivating through a desire to fulfill deontological concerns, then they belong in the utility function. And if not through desire, then how? An endorsed deontological principle might say ‘X!’ or ‘Don’t X’, but why obey it? Deontological principles aren’t obviously intrinsically motivating (in the way anything desired is).
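To make that structural question concrete, here is a minimal sketch (all names and numbers are mine, purely illustrative): a deontological rule can either be folded into the utility function as one more desire, or sit outside the utility function as a hard filter on which actions get considered at all. The two architectures behave differently when the forbidden action pays well enough.

```python
def choose_as_desire(actions, utility, rule_penalty):
    # Rule-following motivated by desire: violating the rule just costs
    # utility, so a large enough payoff elsewhere can still outweigh it.
    return max(actions, key=lambda a: utility(a) - rule_penalty(a))

def choose_with_filter(actions, utility, permitted):
    # Rule as an external side constraint: forbidden actions are never
    # compared on utility at all, no matter how much they would pay.
    allowed = [a for a in actions if permitted(a)]
    return max(allowed, key=utility) if allowed else None

# Illustrative example: 'lie' scores higher on raw utility.
actions = ['lie', 'tell_truth']
utility = lambda a: {'lie': 10.0, 'tell_truth': 3.0}[a]
rule_penalty = lambda a: 5.0 if a == 'lie' else 0.0
permitted = lambda a: a != 'lie'

print(choose_as_desire(actions, utility, rule_penalty))  # 'lie' (10 - 5 > 3)
print(choose_with_filter(actions, utility, permitted))   # 'tell_truth'
```

The first agent's rule-following is transparently desire-driven and lives inside the utility function; the second agent's rule is doing something else, and the question above is what, if anything, motivates obeying it.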
Following deontological concerns can be instrumentally useful for biased finite agents: http://wiki.lesswrong.com/wiki/Ethical_injunction
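A toy simulation of the instrumental point (assumptions and numbers are mine, purely illustrative): an agent whose case-by-case utility estimates carry a self-serving bias comes out behind, in true expected value, compared with an agent that follows a hard injunction and never runs the calculation.

```python
import random

random.seed(0)
BIAS = 3.0   # hypothetical upward bias when evaluating rule-breaking
NOISE = 1.0  # estimation noise on top of the bias

def run(trials=100_000):
    calc_total = injunct_total = 0.0
    for _ in range(trials):
        # Breaking the rule usually hurts in true value.
        true_value = random.gauss(-1.0, 2.0)
        estimate = true_value + BIAS + random.gauss(0.0, NOISE)
        # Case-by-case calculator: defects whenever its biased
        # estimate looks positive; otherwise gets 0, like the
        # injunction-follower, who never defects at all.
        if estimate > 0:
            calc_total += true_value
    return calc_total / trials, injunct_total / trials

calc, injunct = run()
print(f"calculating agent: {calc:+.3f}, injunction-follower: {injunct:+.3f}")
# With these numbers the biased calculator averages below zero,
# so the hard rule is instrumentally better for this agent.
```

Under these (made-up) parameters the injunction wins not because the rule is intrinsically motivating, but because the agent cannot trust its own utility calculations in exactly the cases where breaking the rule looks most attractive.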