The thrust of your argument appears to be that: 1) Trolley problems are idealised; 2) Idealisation can be a dark art rhetorical technique in discussion of the real world; 3) Boo trolley problems!
This is strange; this is the second comment that has summarized an argument I’m not actually making and then argued against the made-up summary.
My argument isn’t against idealization—which would be an argument against any sort of generalized hypothetical and against the majority of fiction ever made.
No, my argument is that trolley problems do not map to reality very well, and that time spent on them is therefore potentially conducive to sloppy thinking. The four problems I listed were perfect foresight, ignoring secondary effects, ignoring human nature, and constraining decisions to two options; these all lead to a lower quality of thinking than a better-constructed question would.
There’s a host of real-world, realistic dilemmas you could use in place of a (flawed) trolley problem: layoffs/redundancies to try to make a company more profitable, or keeping the ship running as is (like Jack Welch at GE); military problems like fighting a retreating defensive action; policing problems like profiling; what burden of proof to require in a courtroom; a doctor being asked for performance-enhancing drugs with potentially fatal consequences… There are plenty of real-world, reality-based situations to use for dilemmas, and we would be better off for using them.
I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and give artificially false constraints.
Which is to say they are idealised problems; they are trued dilemmas. Your remaining argument is fully general against any idealisation or truing of a problem that can also be used rhetorically. This is (I think) what Tordmor’s summary is getting at; mine is doing the same.
Now, I think that’s bad. Agree/disagree there?
So, I clearly disagree, and further you fail to actually establish this “badness”. It is not problematic to think about simplified problems. The trolley problems demonstrate that instinctual ethics are sensitive to whether you have to “act” in some sense. I consider that a bug. The problem is that finding these bugs is harder in “real world” situations; people can avoid the actual point of the dilemma by appealing for more options.
In the examples you give, there is no similar pair of problems. The point isn’t the utilitarianism in a single trolley problem; it’s that when two tracks are replaced by a (canonically larger) person on the bridge and 5 workers further down, people change their answers.
Okay, finally, I think this kind of thinking seeps over into politics, and it’s likewise bad there. Agree/disagree?
You don’t establish this claim (I disagree). It is worth observing that the standard third “trolley” problem is 5 organ recipients and one healthy potential donor for all. The point of that variant is to establish that real-world situations have more complexity: your four problems.
The point of the trolley problems is to draw attention to the fact that H.Sap’s inbuilt ethics are distinctly suboptimal in some circumstances. Your putative “better” dilemmas don’t make that clear. Failing to note and account for these bugs is precisely “sloppy thinking”. Being inconsistent in action on the basis of varying descriptions of identical situations seems to be “sloppy thinking”. Failing on Newcomb’s problem is “sloppy thinking”. Taking an “Activists” hypothetical as a true description of the world is “sloppy thinking”. Knowing that the hardware you use is buggy? Not so much.
If the mistaken summaries are similar to each other
Nah, they were totally different summaries. Both used words I didn’t say and that don’t map at all to arguments I made… it’s like they read something that’s not there.
this may mean that the post did not get across the point you wanted it to get across.
That, or people mis-summarizing for argument’s sake?
Either way, it’s up to me to get the point across clearly. I thought this was a fairly simple, straightforward post, but apparently not.