To the extent that I have been exposed to these types of situations, it seems that the contradictions stem from contrived circumstances. I’ve also never had a simple and consistent deontological system laid out for me that didn’t suffer the same flaws.
So I guess what I’m really getting at is that I see utilitarianism as a good heuristic for matching up circumstances with judgments that “feel right” and I’m curious if/why OP thinks the heuristic is bad.
To the extent that I have been exposed to these types of situations, it seems that the contradictions stem from contrived circumstances.
Not sure what this means.
I’ve also never had a simple and consistent deontological system laid out for me that didn’t suffer the same flaws.
Nor have I. My guess is that simple and consistent is too much to ask of any moral theory.
So I guess what I’m really getting at is that I see utilitarianism as a good heuristic for matching up circumstances with judgments that “feel right” and I’m curious if/why OP thinks the heuristic is bad.
It is definitely a nice heuristic. I don’t know what OP thinks but a lot of people here take it to be the answer, instead of just a heuristic. That may be the target of the objection.
“Exposed to these situations” means that when someone asks about utilitarianism, the question goes, “If there were a fat man in front of a train filled with single parents and you could push him out of the way or let the train run off a cliff, what would you do?” To which my reply is, “When does that ever happen and how does answering that question help me be more ethical?”
Digression: if a decision-theoretic model were translated into a set of axiomatic behaviors, could you potentially apply Gödel’s Incompleteness Theorem to prove that simple and consistent is in fact too much to ask?
Please don’t throw around Gödel’s Theorem before you’ve really understood it; that’s one thing that makes people look like cranks!
“When does that ever happen and how does answering that question help me be more ethical?”
Very rarely; but pondering such hypotheticals has helped me to see what some of my actual moral intuitions are, once they are stripped of rationalizations (and chances to dodge the question). From that point on, I can reflect on them more effectively.
Sorry to sound crankish. Rather than “simple and inconsistent” I might have said that there were contrived and thus unanswerable questions. Regardless, it was a distraction and I shouldn’t have digressed at all.
Anyway thank you for the good answer concerning hypotheticals.
“Exposed to these situations” means that when someone asks about utilitarianism, the question goes, “If there were a fat man in front of a train filled with single parents and you could push him out of the way or let the train run off a cliff, what would you do?” To which my reply is, “When does that ever happen and how does answering that question help me be more ethical?”
These thought experiments aren’t supposed to make you more ethical; they’re supposed to help us understand our morality. If you think there are regularities in ethics (general rules that apply to multiple situations), then it helps to concoct scenarios to see how those rules function. Often they’re contrived because they are experiments, set up to see how the introduction of a moral principle affects our intuitions. In natural science, experimental conditions usually have to be concocted as well: you don’t usually find two population groups for whom everything is the same except for one variable, for example.
Digression: if a decision-theoretic model were translated into a set of axiomatic behaviors, could you potentially apply Gödel’s Incompleteness Theorem to prove that simple and consistent is in fact too much to ask?
Agree with orthonormal. Not sure what this would mean. I don’t think Gödel even does that for arithmetic: arithmetic is simple (though not trivial) and consistent, it just isn’t complete. I have no idea whether ethics could be a complete axiomatic system; I haven’t done much on completeness beyond predicate calculus, and Gödel is still a little over my head.
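For reference, the statement I have in mind is roughly the following (my own paraphrase of the first incompleteness theorem in its modern form, so treat it as a sketch rather than a precise formalization):

If $T$ is a consistent, effectively axiomatized theory strong enough to encode basic arithmetic, then there is a sentence $G_T$ in the language of $T$ such that
$$ T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T, $$
i.e. $T$ is incomplete. Nothing in the theorem says such a theory is inconsistent; it only says that, past a certain expressive strength, you can’t have both consistency and completeness.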
I just mean that any simple set of principles will have to be applied inconsistently to match our intuitions. This, on moral particularism, is relevant.
I didn’t use “consistency” very rigorously here; I meant more that even if a principle matched our intuitions there would be unanswerable questions.
Regardless, good answer. The link seems to be broken for me, though.
The link is working fine for me. It is also the first Google result for “moral particularism”, so you can get there that way.
Tried that and it gave me the same broken site. It works now.
Why on Earth was this downvoted?