It sounds to me as if you’re asserting that the 3^^^3 people’s ignorance of the fact that their specklessness depends on torture makes a positive moral difference in the matter.
That doesn’t seem unreasonable. That knowledge is probably worse than the speck.
Sure, it does have the odd implication that discovering or publicizing unpleasant truths can be morally wrong, though.
That’s a really good point. Does the “repugnant conclusion” problem for total utilitarians imply that they think informing others of bad news can be morally wrong in ordinary circumstances? Or is that implication just the product of a poor definition of utility?
I take it as fairly uncontroversial that a benevolent lie is morally acceptable when the listener can no longer make any decision based on it. That is, falsely saying “Your son survived the plane crash” to a father who is literally moments from dying seems morally acceptable, because the father isn’t going to decide anything differently based on that statement. But that’s an unusual circumstance, so I don’t think it should trouble us.
Agreed. Lying to others to manipulate them deprives them of the ability to make their own choices — which is part of complex human values — but in this case the father doesn’t have any relevant choice to deprive him of.
Those of us who think torture is worse (i.e. are not total utilitarians) probably are not committed to any position on the revealing-unpleasant-truths conundrum. Right?
Not that I can tell.
I suppose another way of looking at this is as a collective-action or extrapolated-volition problem. Each individual in the SPECKS case might prefer a momentary dust speck over the knowledge that their momentary comfort implied someone else’s 50 years of torture. However, a consequentialist agent choosing TORTURE over SPECKS is doing so in the belief that SPECKS is actually worse. Can that agent be implementing the extrapolated volition of the individuals?