6: In the trolley problem, a deontologist wouldn’t decide to push the man, so the pseudo-fat man’s life is saved, whereas he would have been killed if it had been a consequentialist behind him; the cause of his death would have been consequentialism.
Maybe you missed the point of my comment. (Or maybe I’m missing my own point; I can’t tell right now, too sleepy.) Anyway, here’s what I meant:
Both in my example and in the pseudo-trolley problem, people behave suboptimally because they’re lied to. This suboptimal behavior arises from consequentialist reasoning in both cases. But in my example, the lie is also caused by consequentialism, whereas in the pseudo-trolley problem the lie is just part of the problem statement.
Fair point; I didn’t see that. I’m not sure how relevant the distinction is, though: in either world, deontologists will come out ahead of consequentialists.
But we can just as well construct situations where the deontologist would not come out ahead. Once you include lies in the situation, pretty much anything goes. It isn’t clear to me whether one can meaningfully compare the systems based on situations involving incorrect data unless you have some idea of what sort of incorrect data would occur more often, and in what contexts.
Right, and furthermore, a rational consequentialist makes the moral decisions that lead to the best outcomes, averaged over all possible worlds in which the agent has the same epistemic state. Consequentialists and deontologists will occasionally screw things up, and that’s unavoidable; but consequentialists are better on average at making the world a better place.
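To put that “averaged over all possible worlds” claim slightly more formally (a sketch in standard expected-utility notation; the symbols here are my own shorthand, not anything from the scenarios):

$$a^{*} = \arg\max_{a \in A} \sum_{w \in W} P(w \mid E)\, U\big(o(a, w)\big)$$

where $E$ is the agent’s epistemic state, $W$ is the set of worlds consistent with it, and $o(a, w)$ is the outcome of taking action $a$ in world $w$. A lie like the one in scenario #6 shows up as a distorted $P(w \mid E)$: the expectation gets taken over the wrong worlds, so even a perfectly rational consequentialist can end up picking a bad action.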
That’s an argument that only appeals to the consequentialist.
Of course. I am only arguing that consequentialists want to be consequentialists, despite cousin_it’s scenario #6.
I’m not sure that’s true. Forms of deontology will usually have some sort of theory of value that allows for a ‘better world’, though it’s usually tied up with weird metaphysical views that don’t jibe well with consequentialism.
You’re right, it’s pretty easy to construct situations where deontology locks people into a suboptimal equilibrium. You don’t even need lies for that: three stranded people are dying of hunger, and removing the taboo on cannibalism could let two of them survive.
The purpose of my questionnaire wasn’t to attack consequentialism in general, only to show how it applies to interpersonal relationships, which are a huge minefield anyway. Maybe I should have posted my own answers as well. On second thought, that can wait.