That conversation yields a lot more intellectual value; it trains you to think creatively and explore all possible solutions, rather than devise a single heuristic that is only applicable in a 5d-corner case.
I don’t think I could disagree more.
The point of ethical thought experiments like the sick villager problem is not to present a practical scenario to be solved; it’s to illuminate seeming contradictions in our values. Yes, a lot of them have some holes (where did the utility monster come from? Are there more of them with different preferences? Is it possible to make it happy by feeding it Tofumans?), but spending a lot of time plugging them only distracts from the exercise. The reason they appear to be extreme corner cases is that only there does the contradiction show up without complications; but that doesn’t mean they’re not worth addressing on their own terms, the better to come up with a set of principles, not a narrow heuristic, that could be applied in real life. The LCPW, therefore, is a gentle reminder to think in those terms.
Training yourself to think creatively has value, of course; if you’re actually faced with sick villagers you could save by chopping up healthy patients for their organs, you should by all means consider alternative solutions, and you should have trained yourself to do so. But this thought experiment isn’t aimed at a rural surgeon with an itchy bonesaw; it’s aimed at philosophers of ethics, or at armchair hobbyists of same. If you are such a person and you’re faced with something like that as a hypothetical, and you can’t prove that neither that hypothetical nor any analogous situation will ever come up, engage with the apparent ethical contradiction or show that there isn’t one; don’t poke holes in the practical aspects of the hypothetical and congratulate yourself for it. That’s like a physicist saying “well, I can’t actually ride a beam of light, so obviously it’s not worth thinking about”.
Though you could decide the whole genre isn’t worth your time. That’s fine too. Philosophy of ethics is a little abstract for most people’s taste, including mine.
The point of ethical thought experiments like the sick villager problem is … to illuminate seeming contradictions in our values.
That’s fair. I understand the value: it exposes the weakness of using overly rigid heuristics by presenting a situation where those heuristics feel wrong. And I agree that it’s an evasion to nitpick the thought experiment in an attempt to avoid having to face the contradiction of your poorly-formed heuristics.
My standard response to thought-experiment questions is: “I would do everything possible to have my cake and eat it too.” In many cases, that response satisfies whoever asked the question. Immediately defaulting to the LCPW is putting words into the other person’s mouth by assuming they wouldn’t be satisfied with that ethical approach.
Treating the LCPW as something “normal” seriously underestimates how world-bendingly different it would be from reality. If we truly lived in the LCPW, it would in most cases be a reality so different from the one we exist in that it would require a completely different set of ethics, and I just haven’t thought hard enough about it to generate a new system of ethics for each tailor-made LCPW.
Incidentally, I don’t have a problem with the LCPW when it’s actually realistic, as is the case with the “charity” example.