Deontology, Consequentialism, and Virtue ethics are not opposed, and people who argue about them have different assumptions. Basically:
Totally agree. In fact, I go as far as to declare that Deontological value systems and Consequentialist systems can be converted between each other (so long as the system of representing consequentialist values is suitably versatile). This isn’t to say such a conversion is always easy, and it does rely on reflecting off an epistemic model, but it can be done.
To the extent that you are an agent, you are concerned with the consequences of your actions, because you exist to have an effect on the actual world.
I’m not sure this is true. Why can’t we call something that doesn’t care about consequences an agent? Assuming, of course, that it is a suitably advanced and coherent person. Take a human deontologist who stubbornly sticks to their deontological values, ignores consequences, and dismisses as irrelevant that small part of themselves that feels sad about the consequences. That still seems to deserve being called an agent.
To the extent that you are a person (existing in a society), you should follow rules that forbid murder, lying, and leaving the toolbox in a mess, and compel politeness, helping others, and whatnot. A good person does not make a good agent, because what a person should do (for example, help an injured bird) often makes no sense from a consequentialist POV.
I’d actually say a person shouldn’t help an injured bird. Usually it is better, from both an efficiency standpoint and a humanitarian standpoint, to just kill it: that prevents short-term suffering, and its long-term prospects of successfully recovering to function in the wild are negligible. But part of my intuitive experience here is that my intuitions about what makes a ‘good person’ have been corrupted by my consequentialist values to a greater extent than they have for some others. Sometimes my efforts at social influence and behaviour are governed somewhat more than average by my decision-theory intuitions. For example, my ‘should’ advocates lying in some situations where others may say people ‘shouldn’t’ lie (even if they themselves lie hypocritically).
I’m curious, Nyan. You’re someone who has developed an interesting philosophy regarding ethics in earlier posts, and one that I essentially agree with. I am wondering to what extent your instantiation of ‘should’ makes no sense from a consequentialist POV. Mine mostly makes sense, but only once ‘ethical inhibitions’ and consideration of second-order and unexpected consequences are accounted for. Some of it also only makes sense in consequentialist frameworks where a preference for negative consequences to occur in response to certain actions is accepted as a legitimate intrinsic value.
As for helping birds, it depends on the type of injury. If it’s been mauled by a cat, you’re probably right. But if it’s concussed after flying into a wall or window—a very common bird injury—and isn’t dead yet, apparently it has decent odds of full recovery if you discourage it from moving and keep predators away for an hour or few. (The way to discourage a bird from moving and possibly hurting itself is to keep it in a dark confined space such as a shoebox. My roommate used to transport pigeons this way and they really didn’t seem to mind it.)
Regarding the rest of the post, I’ll have to think about it before coming up with a reply.
But if it’s concussed after flying into a wall or window—a very common bird injury—and isn’t dead yet, apparently it has decent odds of full recovery if you discourage it from moving and keep predators away for an hour or few.
Thank you, I wasn’t sure about that. My sisters and I used to nurse birds like that back to health where possible, but I had no idea what the prognosis was. I know that if we found any chicks that were alive but displaced from the nest, they were pretty much screwed once we touched them, due to human-scent contamination causing rejection.
More recently (now that I’m in Melbourne rather than on a farm), the only birds that have hit my window have broken their necks and died. They have been larger birds, so I assume the mass-to-neck-strength ratio is more of an issue. For some reason most of the birds here in the city manage not to fly into windows anywhere near as often as the farm birds did. I wonder if that is mere happenstance or micro-evolution at work. Cities have tons more windows than farmland does, after all.
Actually, the human-scent claim seems to be a myth. Most birds have quite a poor sense of smell.
Blog post quoting a biologist. Snopes.com confirms. However, unless they’re very young indeed, it’s still best to leave them alone:
Possibly this widespread caution against handling young birds springs from a desire to protect them from the many well-intentioned souls who, upon discovering fledglings on the ground, immediately think to cart them away to be cared for. Rather than attempting to impress upon these folks the real reason for leaving well enough alone (that a normal part of most fledglings’ lives is a few days on the ground before they fully master their flying skills), a bit of lore such as this one works to keep many people away from young birds by instilling in them a fear that their actions will doom the little ones to slow starvation. Lore is thus called into service to prevent a harmful act that a rational explanation would be much less effective in stopping.
Oh, we were misled into taking the correct action. Fair enough, I suppose. I had wondered why the birds were so sensitive, and also why the advice was “don’t touch” rather than “put on gloves”. Consider me enlightened.
(Mind you, the just-so story justifying the myth lacks credibility. It seems more likely that the myth exists for the usual reasons myths exist, and that the positive consequences are pure coincidence. Even so, I can take their word for it regarding the observable consequences, if not the explanation.)
I can see how to convert a Consequentialist system into a series of Deontological rules with exceptions. However, not all Deontological systems can be converted to Consequentialist systems. Deontological systems usually contain Absolute Moral Wrongs which are not to be done no matter what, even if they will lead to even more Absolute Moral Wrongs.
I can see how to convert a Consequentialist system into a series of Deontological rules with exceptions
In the case of consequentialists that satisfy the VNM axioms (the only interesting kind), they need only one Deontological rule: “Maximise this utility function!”
However, not all Deontological systems can be converted to Consequentialist systems. Deontological systems usually contain Absolute Moral Wrongs which are not to be done no matter what, even if they will lead to even more Absolute Moral Wrongs.
I suggest that they can, with the caveat that the meaning attributed to the behaviours and motivations will be different, even though the behaviour decreed by the ethics is identical. It is also worth repeating, with emphasis, the disclaimer:
This isn’t to say such a conversion is always easy, and it does rely on reflecting off an epistemic model, but it can be done.
The requirement for the epistemic model is particularly critical to constructing the emulation in that direction. It becomes relatively easy (to conceive, not to do) if you use an evaluation system that is compatible with infinitesimals. If infinitesimals are prohibited (I don’t see why someone would prohibit that aspect of mathematics), then it becomes somewhat harder to create a perfect emulation.
Of course, the above applies when assuming those VNM axioms once again. Throw those away and emulating the deontological system reverts to being utterly trivial. The easiest translation from deontological rules to a VNM-free consequentialist system would be a simple enumeration and ranking of possible permutations. The output consequence-ranking system would be inefficient and “NP-enormous”, but the proof-of-concept translation algorithm would be simple. Extreme optimisations are almost certainly possible.
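To make the proof of concept concrete, here is a minimal sketch of that enumerate-and-rank translation. It is only an illustration: the rule predicates, action names, and function names are all invented for the example, and a “history” is reduced to the tuple of the agent’s own decisions.

```python
# A proof-of-concept sketch of the enumerate-and-rank translation.
# Everything here is hypothetical: a deontological rule is modelled as
# a predicate over a world-history, and a "history" is just the tuple
# of the agent's successive decisions.

from itertools import product

# Hypothetical rules: each returns True if the history violates it.
RULES = [
    lambda history: "kill_innocent" in history,  # never kill an innocent
    lambda history: "lie" in history,            # never lie
]

def violations(history):
    """Count how many rules this world-history violates."""
    return sum(rule(history) for rule in RULES)

def rank_histories(actions, horizon):
    """Enumerate every possible decision sequence up to `horizon` steps
    and sort them, rule-respecting histories first. This is the
    inefficient, exponentially large ("NP-enormous") ranking described
    above; a proof of concept, not something to run at scale."""
    return sorted(product(actions, repeat=horizon), key=violations)

# The top-ranked histories are exactly those a deontologist
# following RULES would produce.
ranking = rank_histories(["kill_innocent", "lie", "do_nothing"], horizon=2)
print(ranking[0])  # ('do_nothing', 'do_nothing')
```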
1- Which is by definition not deontological.
2- A fairly common deontological rule is “Don’t murder an innocent, no matter how great the benefit.” Take the following scenario:
-A has the choice to kill 1 innocent to stop B killing 2 innocents, when B’s own motive is to prevent the death of 4 innocents. B has no idea about A, for simplicity’s sake.
Your conversion would have “Killing innocents intentionally” as an evil, and thus A would be obliged to kill the innocent.
1- Which is by definition not deontological.
No! When we are explicitly talking about emulating one ethical system in another, a successful conversion is not a tautological failure just because it succeeds.
2- A fairly common deontological rule is “Don’t murder an innocent, no matter how great the benefit.” Take the following scenario:
This is not a counter-example. It doesn’t even seem to be an especially difficult scenario. I’m confused.
-A has the choice to kill 1 innocent to stop B killing 2 innocents, when B’s own motive is to prevent the death of 4 innocents. B has no idea about A, for simplicity’s sake.
Ok. So when A is replaced with ConsequentialistA, ConsequentialistA will have a utility function which happens to systematically rank world-histories in which ConsequentialistA executes the decision “intentionally kill innocent” at time T as lower than all world-histories in which ConsequentialistA does not execute that decision (but which are identical up until time T).
Your conversion would have “Killing innocents intentionally” as an evil, and thus A would be obliged to kill the innocent.
No, that would be a silly conversion. If A is a deontological agent that adheres to the rule “never kill innocents intentionally”, then ConsequentialistA will always rate world-histories descending from this decision point in which it kills innocents lower than those in which it doesn’t. It doesn’t kill B.
I get the impression that you are assuming ConsequentialistA to be trying to rank world-histories as if the decision of B matters. It doesn’t. In fact, the only aspects of the world-histories that ConsequentialistA cares about at all are which decision ConsequentialistA makes at each time and with what information it has available. Decisions are something that occur within physics, and so when evaluating world-histories according to some utility function a VNM-consequentialist takes that detail into account. In this case it takes into account no other detail, and even among such details those later in time are rated infinitesimal in significance compared to earlier decisions.
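For concreteness, here is a minimal sketch of what such a utility function could look like, standing in for the infinitesimals with lexicographic comparison. Everything in it (the function name, the decision labels) is hypothetical, invented purely for the illustration.

```python
# A minimal sketch of the contrived utility function ConsequentialistA
# would need. All names are hypothetical. Only ConsequentialistA's own
# decisions enter the evaluation; B's choices, and every other fact
# about the world-history, are simply not part of the input. Earlier
# decisions dominate later ones: a lexicographic tuple is the standard
# infinitesimal-free way to say "the decision at time T matters
# infinitely more than anything that comes after it".

def utility_A(own_decisions):
    """Rank a world-history by ConsequentialistA's own decision stream.

    `own_decisions` is the chronological sequence of decisions made.
    Each decision scores 1 if it complies with "never intentionally
    kill an innocent" and 0 if it does not. Python compares tuples
    lexicographically, so compliance at an earlier time always
    outweighs everything that happens later.
    """
    return tuple(0 if d == "kill_innocent" else 1 for d in own_decisions)

# Killing one innocent to stop B is ranked below refusing, no matter
# how many deaths B then causes: B's actions never appear in the input.
assert utility_A(("kill_innocent",)) < utility_A(("refuse",))
```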
You have no doubt noticed that the utility function alluded to above seems contrived to the point of utter ridiculousness. This is true. It is also inevitable. From the perspective of a typical consequentialist ethic we should expect a typical deontological value system to be utterly insane, to the point of being outright evil. A pure and naive consequentialist encountering his first deontologist may well say “What the F@#%? Are you telling me that of all the things that ever exist or occur in the whole universe, across all of space and time, the only consequence that matters to you is what your decision is in this instant? Are you for real? Is your creator trolling me?” We’re just considering that viewpoint in the form of the utility function it would take to make it happen.

Alright, conceded.
More moves are possible. There is the agent-relative consequentialism discussed by Doug Portmore; if a consequence counts as overridingly bad for A if it involves A causing an innocent death, and overridingly bad for B if it involves B causing an innocent death (but not overridingly bad for A if B causes an innocent death; only as bad as normal failures to prevent preventable deaths), then A shouldn’t kill one innocent to stop B from killing 2, because that would produce a worse outcome for A (though it would be a better outcome for B). I haven’t looked closely at any of Portmore’s work for a long time, but I recall being pretty convinced by him in the past that similar relativizing moves could produce a consequentialism which exactly duplicates any form of deontological theory. I also recall Portmore used to think that some form of relativized consequentialism was likely to be the correct moral theory; I don’t know if he still thinks that.
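A rough sketch of that relativizing move, as a toy model (my own illustration, not Portmore’s formalism; the function and the outcome encoding are invented for the example):

```python
# A toy model of agent-relative consequentialism. The encoding is
# invented for the example: `outcome` maps each killer to the number
# of innocents they killed, and each agent scores the same outcome
# with its own function. Deaths the agent itself caused occupy the
# first slot of a lexicographic tuple (overridingly bad); deaths it
# merely failed to prevent count only in the second slot (ordinary
# badness). Higher tuples are better.

def agent_relative_value(agent, outcome):
    """Score `outcome` from `agent`'s point of view."""
    own_kills = outcome.get(agent, 0)
    total_deaths = sum(outcome.values())
    return (-own_kills, -total_deaths)

# A kills 1 innocent to stop B, versus A refusing and B killing 2.
intervene = {"A": 1}  # A causes one death; B is stopped
refuse = {"B": 2}     # A causes none; B kills two

# For A, refusing is better (A causes no death) even though total
# deaths are higher; for B, A's intervention would have been better.
assert agent_relative_value("A", refuse) > agent_relative_value("A", intervene)
assert agent_relative_value("B", intervene) > agent_relative_value("B", refuse)
```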
I’ve never heard of Doug Portmore but your description of his work suggests that he is competent and may be worth reading.
I also recall Portmore used to think that some form of relativized consequentialism was likely to be the correct moral theory; I don’t know if he still thinks that.
This seems overwhelmingly likely, especially since the alternatives that seem plausible can be conveniently represented as instances of it. This is certainly the framework in which I evaluate all proposed systems of value. When people propose things that are not relative (such crazy things as ‘total utilitarianism’), I intuitively think of them in terms of a relative consequentialist system that happens to arbitrarily assert that certain considerations must be equal.