There’s something really odd about characterizing “torture is preferable to this utterly unrealizable thing” as “advocating torture.”
It’s not obviously wrong… I mean, someone who wanted to advocate torture could start out from that kind of position, and then once they’d brought their audience along swap it out for simply “torture is preferable to alternatives”, using the same kind of rhetorical techniques you use here… but it doesn’t seem especially justified in this case. Mostly, it seems like you want to argue that torture is bad whether or not anyone disagrees with you.
Anyway, to answer your question: to a total utilitarian, what matters is total utility-change. That includes knock-on effects, including mental discomfort due to hearing about the torture, and the way torturing increases the likelihood of future torture of others, and all kinds of other stuff. So transmitting information about events is itself an event with moral consequences, to be evaluated by its consequences. It’s possible that keeping the torture a secret would have net positive utility; it’s possible it would have net negative utility.
All of which is why the original thought experiment explicitly left the knock-on effects out, although many people are unwilling or unable to follow the rules of that thought experiment and end up discussing more real-world-plausible variants of it instead (as you do here).
For a utilitarian, the answer is clearly that the information about morally significant events is what matters.
Well, in some bizarre sense that’s true. I mean, if I’m being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me), then a utilitarian presumably concludes that this is not an event of moral significance. (It’s decidedly unclear in what sense it’s an event at all.)
I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve their ends.
Sure, that seems likely.
I’m interested to hear how many torturers would change their minds if we killed the guy instead of just torturing him. How far does your “utility is all that matters” philosophy go?
I endorse killing someone over allowing a greater amount of bad stuff to happen, if those are my choices. Does that answer your question? (I also reject your implication that killing someone is necessarily worse than torturing them for 50 years, incidentally. Sometimes it is, sometimes it isn’t. Given that choice, I would prefer to die… and in many scenarios I endorse that choice.)
There’s something really odd about characterizing “torture is preferable to this utterly unrealizable thing” as “advocating torture.”
You know, in natural language “x is better than y” often has the connotation “x is good”, and people go to great lengths to avoid such wordings if they don’t want that connotation. For example, “‘light’ cigarettes are no safer than regular ones” is logically equivalent to “regular cigarettes are at least as safe as ‘light’ ones”, but I can’t imagine an anti-smoking campaign saying the latter.
Fair enough. For maximal precision I suppose I ought to have said “I reject your characterization of...” rather than “There’s something really odd about characterizing...,” but I felt some polite indirection was called for.
Well, in some bizarre sense that’s true. I mean, if I’m being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me), then a utilitarian presumably concludes that this is not an event of moral significance. (It’s decidedly unclear in what sense it’s an event at all.)
Well, assuming the torture is artificially bounded to have absolutely no impact, then yes, it is irrelevant (in fact, it arguably doesn’t even exist). However, supposing it is not so bounded, a good rationalist utilitarian will also consider the torture’s future effects; once the fact of the torture can be deduced, it retroactively becomes a morally significant event from a timeless perspective, if I understand the theory properly.