More moves are possible. There is the agent-relative consequentialism discussed by Doug Portmore: suppose a consequence counts as overridingly bad for A if it involves A causing an innocent death, and overridingly bad for B if it involves B causing an innocent death (but not overridingly bad for A if B causes an innocent death; that is only as bad as an ordinary failure to prevent a preventable death). Then A shouldn’t kill one innocent to stop B from killing two, because that would produce a worse outcome for A (though a better outcome for B). I haven’t looked closely at any of Portmore’s work for a long time, but I recall being pretty convinced by him in the past that similar relativizing moves could produce a consequentialism that exactly duplicates any form of deontological theory. I also recall that Portmore used to think some form of relativized consequentialism was likely to be the correct moral theory; I don’t know if he still thinks that.
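To make the structure explicit, here is a minimal formal sketch; the notation and weights are mine, not Portmore’s. Let $v_X(o)$ be the value of outcome $o$ relative to agent $X$, with a killing committed by $X$ itself weighted at $-K$ and a death $X$ merely fails to prevent weighted at $-d$, where $K > 2d$ captures the ‘overriding’ badness:

$$v_A(\text{A kills one}) = -K \;<\; -2d = v_A(\text{A refrains, B kills two}),$$
$$v_B(\text{A kills one}) = -d \;>\; -2K = v_B(\text{A refrains, B kills two}).$$

Relative to A’s ranking, refraining is the better outcome, while relative to B’s ranking A’s killing the one would have been better, which is exactly the asymmetry described above.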
I’ve never heard of Doug Portmore, but your description of his work suggests that he is competent and may be worth reading.
I also recall Portmore used to think that some form of relativized consequentialism was likely to be the correct moral theory; I don’t know if he still thinks that.
This seems overwhelmingly likely, especially since the alternatives that seem plausible can be conveniently represented as instances of it. This is certainly the framework in which I evaluate all proposed systems of value. When people propose things that are not relative (such crazy things as ‘total utilitarianism’), I intuitively think of them in terms of a relative consequentialist system that happens to arbitrarily assert that certain considerations must be equal.
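For instance (my gloss, not anyone’s official formalization): total utilitarianism is the degenerate case in which every agent’s relative value function is the same unweighted sum,

$$v_X(o) = \sum_i u_i(o) \quad \text{for every agent } X,$$

i.e. the relativized framework plus the arbitrary-looking constraint that all agent-relative rankings coincide and every person’s welfare carries equal weight.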