Can you provide a link to an academic paper or blog post that discusses this in more depth?
The kind of thought experiments (I think) Matt is referring to are so basic that I don’t know of any papers that go into them in depth; they get discussed in intro-level ethics courses. For example: a white woman is raped and murdered in the segregation-era Deep South. Witnesses say the culprit was black. Tensions are high, and there is a high likelihood that race riots will break out and whites will simply start killing blacks. Hundreds will die unless the culprit is found and convicted quickly. There are no leads, but as police chief/attorney/governor you can frame an innocent man in order to charge and convict quickly. Both sum and average utilitarianism suggest you should.
Same goes for pushing fat people in front of runaway trolleys and carving up homeless people for their organs.
Utilitarianism means either biting all these bullets or accepting these cases as reductios of the theory.
Edit: Or structuring/defining utilitarianism in a way that avoids these issues. But that is harder than it looks.
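To make the sum/average point concrete, here is a toy calculation; every number in it (town size, riot death toll, per-person utilities) is an invented illustration, not part of the thought experiment itself:

```python
# Toy comparison of sum vs. average utilitarianism for the framing
# dilemma. All quantities below are assumptions made up for this sketch.

POPULATION = 10_000   # assumed town size
BASELINE = 1.0        # per-person utility if unharmed
HARM = -99.0          # per-person utility if killed in the riots (or framed)
RIOT_DEATHS = 300     # assumed death toll if riots break out
FRAMED = 1            # one innocent man framed and convicted

def utilities(harmed):
    """Per-person utilities: `harmed` people at HARM, everyone else at BASELINE."""
    return [HARM] * harmed + [BASELINE] * (POPULATION - harmed)

for label, world in [("don't frame", utilities(RIOT_DEATHS)),
                     ("frame", utilities(FRAMED))]:
    total = sum(world)
    print(f"{label:12s} sum = {total:8.0f}   average = {total / len(world):7.4f}")

# don't frame  sum =   -20000   average = -2.0000
# frame        sum =     9900   average =  0.9900
#
# Both the total and the average come out higher in the "frame" world,
# so both aggregation rules recommend framing the innocent man.
```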
Or seeing the larger consequences of any of these courses of action.
(Well, except for pushing the fat man in front of the trolley, which I largely favour.)
I’m comfortable positing things about these scenarios such that there are no larger consequences of these courses of action: no one finds out, no norms are set, etc.
I do suspect an unusually high number of people here will want to bite the bullet. (Interesting side effect of making philosophical thought experiments hilarious: it can be hard to tell whether someone is kidding about them.) But it seems well worth keeping in mind that the vast majority of people would find a world governed by the typical forms of utilitarianism highly immoral.
These are not realistic scenarios as painted. To actually imagine what might really be the right thing to do if a scenario fitting these very alien conditions arose, you would have to paint a lot more of the picture, and doing so might leave our intuitions about what was right in that scenario looking very different.
They’re not realistic because they’re designed to isolate the relevant intuitions from the noise. Being suspicious of our intuitions about fictional scenarios is fine, but I don’t think that lets you get away without updating. These scenarios are easy to generate and have several features in common. I don’t expect anyone to give up their utilitarianism on the basis of the above comment, but a little more skepticism would be good.
I’m happy to accept whatever trolley problem you care to suggest. Those are artificial, but there’s no conceptual problem with setting them up in today’s world: you just put the actors and rails and levers in the right places and you’re set. But to set up a situation where hundreds will die in this possible riot, and yet it is certain that no one will find out and no norms will be set if you frame the guy: that’s no longer a problem set in a world anything like our world, and I’d need to know a lot more about this weird proposed world before I was prepared to say what the right thing to do in it might be.
To the extent that I have been exposed to these types of situations, it seems that the contradictions stem from contrived circumstances. I’ve also never had a simple and consistent deontological system laid out for me that didn’t suffer the same flaws.
So I guess what I’m really getting at is that I see utilitarianism as a good heuristic for matching up circumstances with judgments that “feel right”, and I’m curious whether (and why) the OP thinks the heuristic is bad.
“To the extent that I have been exposed to these types of situations, it seems that the contradictions stem from contrived circumstances.”
Not sure what this means.
“I’ve also never had a simple and consistent deontological system laid out for me that didn’t suffer the same flaws.”
Nor have I. My guess is that “simple and consistent” is too much to ask of any moral theory.
“I see utilitarianism as a good heuristic for matching up circumstances with judgments that ‘feel right’.”
It is definitely a nice heuristic. I don’t know what the OP thinks, but a lot of people here take utilitarianism to be the answer rather than just a heuristic. That may be the target of the objection.
“Exposed to these situations” means that when someone asks about utilitarianism, they say: “If there were a fat man in front of a train filled with single parents and you could push him out of the way or let the train run off a cliff, what would you do?” To which my reply is: “When does that ever happen, and how does answering that question help me be more ethical?”
Digression: if a decision-theoretic model were translated into a set of axiomatic behaviors, could you potentially apply Gödel’s Incompleteness Theorem to prove that “simple and consistent” is in fact too much to ask?
Please don’t throw around Gödel’s Theorem before you’ve really understood it; that’s one thing that makes people look like cranks!
“When does that ever happen, and how does answering that question help me be more ethical?”
Very rarely; but pondering such hypotheticals has helped me to see what some of my actual moral intuitions are, once they are stripped of rationalizations (and of chances to dodge the question). From that point on, I can reflect on them more effectively.
Sorry to sound crankish. Rather than “simple and inconsistent” I might have said that there were contrived and thus unanswerable questions. Regardless, it distracted, and I shouldn’t have digressed at all.
Anyway, thank you for the good answer concerning hypotheticals.
“When does that ever happen, and how does answering that question help me be more ethical?”
These thought experiments aren’t supposed to make you more ethical; they’re supposed to help us understand our morality. If you think there are regularities in ethics (general rules that apply to multiple situations), then it helps to concoct scenarios to see how those rules function. They are often contrived because they are experiments, set up to see how the introduction of a moral principle affects our intuitions. Experimental conditions usually have to be concocted in natural science as well: you don’t usually find two population groups for whom everything is the same except for one variable, for example.
“Digression: if a decision-theoretic model were translated into a set of axiomatic behaviors, could you potentially apply Gödel’s Incompleteness Theorem to prove that ‘simple and consistent’ is in fact too much to ask?”
Agree with orthonormal; I’m not sure what this would mean. I don’t think Gödel’s theorem even does that for arithmetic: arithmetic is simple (though not trivial) and consistent; it just isn’t complete. I have no idea whether ethics could be a complete axiomatic system. I haven’t done much on completeness beyond the predicate calculus, and Gödel is still a little over my head.
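For reference, here is the standard textbook statement of the result being gestured at, for arithmetic only; nothing here claims it transfers to ethics:

```latex
% Gödel's first incompleteness theorem, standard form (needs amsmath/amssymb):
\textbf{Theorem.} Let $T$ be an effectively axiomatizable, consistent
theory that interprets Robinson arithmetic $Q$. Then $T$ is incomplete:
there is a sentence $G_T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T .
\]
% Note that consistency is a hypothesis and incompleteness the conclusion,
% so the theorem cannot rule out "simple and consistent" systems; it says
% such systems (if strong enough and effectively axiomatized) must leave
% some sentences undecided.
```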
I just mean that any simple set of principles will have to be applied inconsistently to match our intuitions. This piece on moral particularism is relevant.
I didn’t use “consistency” very rigorously here; I meant more that even if a principle matched our intuitions, there would be unanswerable questions.
Regardless, good answer. The link seems to be broken for me, though.
Link is working fine for me. It is also the first Google result for “moral particularism”, so you can get there that way.
Tried that and it gave me the same broken site. It works now.
Why on Earth was this downvoted?