1- Which is by definition not deontological.

No! When we are explicitly talking about emulating one ethical system in another, a successful conversion is not a tautological failure just because it succeeds.
2- A fairly common deontological rule is “Don’t murder an innocent, no matter how great the benefit.” Take the following scenario:
This is not a counter-example. It doesn’t even seem to be an especially difficult scenario. I’m confused.
-A has the choice to kill 1 innocent to stop B killing 2 innocents, when B’s own motive is to prevent the death of 4 innocents. B has no idea about A, for simplicity’s sake.
Ok. So when A is replaced with ConsequentialistA, ConsequentialistA will have a utility function which happens to systematically rank world-histories in which ConsequentialistA executes the decision “intentionally kill innocent” at time T as lower than all world-histories in which ConsequentialistA does not execute that decision (but which are identical up until time T).
Your conversion would have “Killing innocents intentionally” as an evil, and thus A would be obliged to kill the innocent.
No, that would be a silly conversion. If A is a deontological agent that adheres to the rule “never kill innocents intentionally” then ConsequentialistA will always rate world-histories descending from this decision point in which it kills innocents lower than those in which it doesn’t. It doesn’t kill the innocent.
I get the impression that you are assuming ConsequentialistA to be trying to rank world-histories as if the decision of B matters. It doesn’t. In fact, the only aspects of the world-histories that ConsequentialistA cares about at all are which decision ConsequentialistA makes at a given time and with what information it has available. Decisions occur within physics, so when evaluating world-histories according to some utility function a VNM-consequentialist takes that detail into account. In this case it takes into account no other detail, and even among such details those later in time are rated as infinitesimal in significance compared to earlier decisions.
You have no doubt noticed that the utility function alluded to above seems contrived to the point of utter ridiculousness. This is true. This is also inevitable. From the perspective of a typical consequentialist ethic we should expect a typical deontological value system to be utterly insane to the point of being outright evil. A pure and naive consequentialist, when encountering his first deontologist, may well say “What the F@#%? Are you telling me that of all the things that ever exist or occur in the whole universe across all of space and time, the only consequence that matters to you is what your decision is in this instant? Are you for real? Is your creator trolling me?” We’re just considering that viewpoint in the form of the utility function it would take to make it happen.
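The contrived utility function described above can be sketched concretely. This is a minimal illustration, not anything from the thread itself: the world-history representation, the agent names, and the death counts are all my own assumptions. Python tuples compare lexicographically, which makes the “whether I kill dominates every other consequence” ordering easy to express.

```python
from typing import NamedTuple

class Decision(NamedTuple):
    agent: str   # who decides
    action: str  # e.g. "kill_innocent" or "refrain"
    time: int

def utility_for_A(history: list, deaths: int) -> tuple:
    """Utility function emulating A's deontological rule.

    Lexicographic ordering: whether A itself intentionally kills an
    innocent dominates every other feature of the world-history; only
    after that tie-break do ordinary consequences (here, total deaths)
    matter at all.
    """
    a_kills = any(d.agent == "A" and d.action == "kill_innocent" for d in history)
    # Tuples compare element by element, so the first component
    # (not a_kills) always trumps the second (fewer deaths is better).
    return (not a_kills, -deaths)

# The scenario: A may kill 1 innocent to stop B killing 2.
comply  = [Decision("A", "kill_innocent", 0)]                               # 1 death
refrain = [Decision("A", "refrain", 0), Decision("B", "kill_innocent", 1)]  # 2 deaths

# ConsequentialistA ranks refraining higher despite the worse body count.
assert utility_for_A(refrain, deaths=2) > utility_for_A(comply, deaths=1)
```

Any detail of the world other than A’s own decision can be appended as further tuple components; they are, as described above, infinitesimal in significance next to the first.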
More moves are possible. There is the agent-relative consequentialism discussed by Doug Portmore; if a consequence counts as overridingly bad for A if it involves A causing an innocent death, and overridingly bad for B if it involves B causing an innocent death (but not overridingly bad for A if B causes an innocent death; only as bad as normal failures to prevent preventable deaths), then A shouldn’t kill one innocent to stop B from killing 2, because that would produce a worse outcome for A (though it would be a better outcome for B). I haven’t looked closely at any of Portmore’s work for a long time, but I recall being pretty convinced by him in the past that similar relativizing moves could produce a consequentialism which exactly duplicates any form of deontological theory. I also recall Portmore used to think that some form of relativized consequentialism was likely to be the correct moral theory; I don’t know if he still thinks that.
I’ve never heard of Doug Portmore but your description of his work suggests that he is competent and may be worth reading.
I also recall Portmore used to think that some form of relativized consequentialism was likely to be the correct moral theory; I don’t know if he still thinks that.
This seems overwhelmingly likely, especially since the alternatives that seem plausible can be conveniently represented as instances of it. This is certainly the framework within which I evaluate all proposed systems of value. When people propose things that are not relative (crazy things such as ‘total utilitarianism’), I intuitively think of them in terms of a relative consequentialist system that happens to arbitrarily assert that certain considerations must be equal.
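For illustration, the relativizing move described in the quoted comment can be sketched as a utility function indexed by the evaluating agent. The representation and numbers here are assumptions of mine, not Portmore’s formalism: a death the evaluator itself causes is overridingly bad for that evaluator, while deaths caused by others count only as ordinary failures to prevent preventable deaths.

```python
def agent_relative_utility(evaluator: str, history: list, deaths: int) -> tuple:
    """Agent-relative value of a world-history for `evaluator`.

    Lexicographic: the evaluator's own killing dominates; total deaths
    break ties. Each agent therefore ranks the same world-histories
    with its own utility function.
    """
    own_kill = any(agent == evaluator and action == "kill_innocent"
                   for agent, action in history)
    return (not own_kill, -deaths)

# A kills 1 innocent to stop B, versus A refrains and B kills 2.
a_kills = [("A", "kill_innocent")]                    # 1 death
b_kills = [("A", "refrain"), ("B", "kill_innocent")]  # 2 deaths

# The same pair of outcomes gets opposite rankings depending on the
# evaluator: A's killing is the worse outcome for A but the better
# outcome for B, exactly the structure described in the quote.
assert agent_relative_utility("A", b_kills, 2) > agent_relative_utility("A", a_kills, 1)
assert agent_relative_utility("B", a_kills, 1) > agent_relative_utility("B", b_kills, 2)
```

Relativizing the function in this way is what lets a single consequentialist schema reproduce agent-centred deontological constraints.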
Alright- conceded.