I find this method to be intellectually dangerous.
We do not live in the LCPW, and constantly considering ethical problems as if we do is a mind-killer. It trains the brain to stop looking for creative solutions to intractable real-world problems and instead focus on rigid abstract solutions to conceptual problems.
I agree that there is a modicum of value in considering the LCPW. Just like there’s a modicum of value in eating a pound of butter for dinner. It’s just that there are a lot better ways to spend one’s time. The proper response to “Well, what about the LCPW?” is, “How do you know we are in the LCPW?” I think there is far more value in having a conversation that explores our assumptions about a difficult problem than in one that indulges them.
Q: Consider the Sick Villager problem. How do we know that the patients won’t die due to transplant rejection?
A: Oh, well, Omega says so.
Q: Okay. So, how do we know Omega is right?
A: Because Omega is omniscient.
Q: If Omega is omniscient, why can’t it tell us how to grow working organs without need for human sacrifice?
A: Because there are limits to how much it knows.
Q: Okay, so if I knew in advance that Omega is omniscient but has these limitations, why on earth am I working in a village helping 10 villagers instead of working on advancing Omega to the point where it doesn’t have those limitations? (And if I don’t know this in advance, why would I suddenly start believing some random computer that claims it is Omega?)
A: I don’t know, because it’s the LCPW.
That conversation yields a lot more intellectual value; it trains you to think creatively and explore all possible solutions, rather than devise a single heuristic that is only applicable in a 5d-corner case. As I indicated above, it can actually be dangerous: novice rationalists may feel compelled to apply that narrow heuristic to situations where a better, more creative solution is available.
That conversation yields a lot more intellectual value; it trains you to think creatively and explore all possible solutions, rather than devise a single heuristic that is only applicable in a 5d-corner case.
I don’t think I could disagree more.
The point of ethical thought experiments like the sick villager problem is not to present a practical scenario to be solved; it’s to illuminate seeming contradictions in our values. Yes, a lot of them have some holes (where did the utility monster come from? Are there more of them with different preferences? Is it possible to make it happy by feeding it Tofumans?), but spending a lot of time plugging them only distracts from the exercise. The reason they appear to be extreme corner cases is that only there does the contradiction show up without complications; but that doesn’t mean they’re not worth addressing on their own terms, the better to come up with a set of principles (not a narrow heuristic) that could be applied in real life. The LCPW, therefore, is a gentle reminder to think in those terms.
Training yourself to think creatively has value, of course; if you’re actually faced with sick villagers you could save by chopping up healthy patients for their organs, you should by all means consider alternative solutions, and you should have trained yourself to do so. But this thought experiment isn’t aimed at a rural surgeon with an itchy bonesaw; it’s aimed at philosophers of ethics, or at armchair hobbyists of same. If you are such a person and you’re faced with something like that as a hypothetical, and you can’t prove that neither that hypothetical nor any analogous situation will ever come up, then engage with the apparent ethical contradiction or show that there isn’t one; don’t poke holes in the practical aspects of the hypothetical and congratulate yourself for it. That’s like a physicist saying “well, I can’t actually ride a beam of light, so obviously it’s not worth thinking about”.
Though you could decide the whole genre isn’t worth your time. That’s fine too. Philosophy of ethics is a little abstract for most people’s taste, including mine.
The point of ethical thought experiments like the sick villager problem is … to illuminate seeming contradictions in our values.
That’s fair. I understand the value: it exposes the weakness of using overly rigid heuristics by presenting a situation where those heuristics feel wrong. And I agree that it’s an evasion to nitpick the thought experiment in an attempt to avoid facing the contradictions in your poorly formed heuristics.
My standard response to thought-experiment questions is: “I would do everything possible to have my cake and eat it too.” In many cases, that response satisfies whoever asked the question. Immediately defaulting to the LCPW is putting words into the other person’s mouth by assuming they wouldn’t be satisfied with that ethical approach.
Treating the LCPW as something “normal” seriously underestimates how world-bendingly different it would be from reality. If we truly lived in the LCPW, in most cases it would be such a different reality from the one we exist in that it would require a completely different set of ethics, and I just haven’t thought hard enough about it to generate a new system of ethics for each tailor-made LCPW.
Incidentally, I don’t have a problem with the LCPW when it’s actually realistic, as is the case with the “charity” example.
I agree that insisting on assuming the LCPW is a lousy strategic approach to most real-world situations, which (as you say) don’t normally occur in the LCPW. And I agree that assuming it as an analytical stance towards hypothetical situations increases the chance that I’ll adopt it as a strategic approach to a real-world situation, and therefore has some non-zero cost.
That said, I would also say that a one-sided motivated “exploration” of a situation which happens to align with our pre-existing biases about that situation has some non-zero cost, for basically the same reasons.
The OP starts out by objecting to that sort of motivated cognition, and proposes the LCPW strategy as a way of countering it. You object to the LCPW strategy, but seem to disregard the initial problem of motivated cognition completely.
I suspect that in most cases, the impulse to challenge the assumptions of hypothetical situations does more harm than good, since motivated cognition is pretty pervasive among humans.
But, sure, in the rare case of a thinker who really does “explore assumptions about a difficult problem” rather than simply evade them, I agree that it’s an exercise that does more good than harm.
If you are such a thinker and primarily engage with such thinkers, that’s awesome; whatever you’re doing, keep it up!