I was very surprised to find that a supporter of the Complexity of Value hypothesis and the author who warns against simple utility functions advocates torture using simple pseudo-scientific utility calculus.
My utility function has constraints that prevent me from doing awful things to people, unless doing so would prevent equally awful things from being done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely shared preferences are presumably valid.
In fact, the mental discomfort felt by people who heard of the torture would swamp the disutility from the dust specks. Which brings us to an interesting question: is morality carried by events or by information about events? If nobody else knew of my choice, would that make it better?
For a utilitarian, the answer is clearly that the information about morally significant events is what matters. I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve their ends.
Also, I’m interested to hear how many torturers would change their minds if we killed the guy instead of just torturing him. How far does your “utility is all that matters” philosophy go?
There’s something really odd about characterizing “torture is preferable to this utterly unrealizable thing” as “advocating torture.”
It’s not obviously wrong… I mean, someone who wanted to advocate torture could start out from that kind of position, and then once they’d brought their audience along swap it out for simply “torture is preferable to alternatives”, using the same kind of rhetorical techniques you use here… but it doesn’t seem especially justified in this case. Mostly, it seems like you want to argue that torture is bad whether or not anyone disagrees with you.
Anyway, to answer your question: to a total utilitarian, what matters is total utility-change. That includes knock-on effects, including mental discomfort due to hearing about the torture, and the way torturing increases the likelihood of future torture of others, and all kinds of other stuff. So transmitting information about events is itself an event with moral consequences, to be evaluated by its consequences. It’s possible that keeping the torture a secret would have net positive utility; it’s possible it would have net negative utility.
All of which is why the original thought experiment explicitly left the knock-on effects out, although many people are unwilling or unable to follow the rules of that thought experiment and end up discussing more realistic variants of it instead (as you do here).
For a utilitarian, the answer is clearly that the information about morally significant events is what matters.
Well, in some bizarre sense that’s true. I mean, if I’m being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me), a utilitarian presumably concludes that this is not an event of moral significance. (It’s decidedly unclear in what sense it’s an event at all.)
I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve their ends.
Sure, that seems likely.
I’m interested to hear how many torturers would change their minds if we killed the guy instead of just torturing him. How far does your “utility is all that matters” philosophy go?
I endorse killing someone over allowing a greater amount of bad stuff to happen, if those are my choices. Does that answer your question? (I also reject your implication that killing someone is necessarily worse than torturing them for 50 years, incidentally. Sometimes it is, sometimes it isn’t. Given that choice, I would prefer to die… and in many scenarios I endorse that choice.)
There’s something really odd about characterizing “torture is preferable to this utterly unrealizable thing” as “advocating torture.”
You know, in natural language “x is better than y” often has the connotation “x is good”, and people go to great lengths to avoid such wordings if they don’t want that connotation. For example, “‘light’ cigarettes are no safer than regular ones” is logically equivalent to “regular cigarettes are at least as safe as ‘light’ ones”, but I can’t imagine an anti-smoking campaign saying the latter.
Fair enough. For maximal precision I suppose I ought to have said “I reject your characterization of...” rather than “There’s something really odd about characterizing...,” but I felt some polite indirection was called for.
Well, in some bizarre sense that’s true. I mean, if I’m being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me), a utilitarian presumably concludes that this is not an event of moral significance. (It’s decidedly unclear in what sense it’s an event at all.)
Well, assuming the torture is artificially bounded so that it has no impact whatsoever, then yes, it is irrelevant (in fact, it arguably doesn’t even exist). However, a good rationalist utilitarian will also consider the future effects of the torture, supposing it is not so bounded, and once the fact of the torture can be deduced, it retroactively becomes a morally significant event from a timeless perspective, if I understand the theory properly.
The point was not necessarily to advocate torture. It’s to take the math seriously.
In fact, the mental discomfort felt by people who heard of the torture would swamp the disutility from the dust specks.
Just how many people do you expect to hear about the torture? Have you taken seriously how big a number 3^^^3 is? By how many utilons do you expect their disutility to exceed the disutility from the dust specks?
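A minimal sketch, not from the original post, of Knuth’s up-arrow notation (which is what the carets in 3^^^3 abbreviate), purely to illustrate the scale that question is pointing at:

```python
# Illustration only: Knuth's up-arrow notation, a ↑^n b.
# One arrow is ordinary exponentiation; each extra arrow iterates
# the previous operation.

def up_arrow(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987

# 3^^^3 = 3^^(3^^3): a power tower of 3s that is 7,625,597,484,987 levels
# tall. Calling up_arrow(3, 3, 3) would never finish; even the number of
# digits in the answer dwarfs the number of atoms in the observable universe.
```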
First, I don’t buy the process of summing utilons across people as a valid one. Lots of philosophers have objected to it. This is a bullet-biting club, and I get that. I’m just not biting those bullets. I don’t think 400 years of criticism of Utilitarianism can be solved by biting all the bullets. And in Eliezer’s recent writings, it appears he is beginning to understand this. Which is great. It is reducing the odds he becomes a moral monster.
Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me on that from the Complex Values post and posts about the evils of paperclip maximizers. So great evils are qualitatively different to me from small evils, even small evils done to a great number of people!
I get what you’re trying to do here. You’re trying to demonstrate that ordinary people are innumerate, and you all are getting a utility spike from imagining you’re more rational than them by choosing the “right” (naive hyper-rational utilitarian-algebraist) answer. But I don’t think it’s that simple when we’re talking about morality. If it were, the philosophical project that’s lasted 2500 years would finally be over!
You were the one who claimed that the mental discomfort from hearing about torture would swamp the disutility from the dust specks; I assumed from that claim that you thought they were commensurable. I thought it was odd that you accepted commensurability but believed the math worked out in the opposite direction.
I believe Eliezer’s post was not so much directed at folks who disagree with utilitarianism; rather, it’s supposed to be about taking the math seriously, for those who are utilitarians. If you’re not a utilitarian, you can freely regard it as another reductio.
You don’t have to be any sort of simple or naive utilitarian to encounter this problem. As long as goods are in any way commensurable, you need to actually do the math. And it’s hard to make a case for a utilitarianism in which goods are not commensurable—in practice, we can spend money towards any sort of good, and we don’t favor only spending money on the highest-order ones, so that strongly suggests commensurability.
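To spell out the arithmetic this exchange keeps gesturing at, here is a minimal worked inequality, assuming purely for illustration that one dust speck carries some disutility $\varepsilon > 0$ and that fifty years of torture carries some finite disutility $T$ on the same scale:

$$
3\uparrow\uparrow\uparrow 3 \cdot \varepsilon \;>\; T
\quad\Longleftrightarrow\quad
\varepsilon \;>\; \frac{T}{3\uparrow\uparrow\uparrow 3}
$$

On that assumption the specks outweigh the torture unless $\varepsilon$ falls below $T / 3\uparrow\uparrow\uparrow 3$, a threshold far smaller than any disutility a person could actually register, which is why the disagreement above turns on whether the two quantities are commensurable at all.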