If you try to do moral philosophy, you inevitably end up thinking a lot about people getting run over by trolleys and such. Also if you want to design good chairs, you need to understand people’s butts really well. Though of course you’re allowed to say it’s a creepy job but still enjoy the results of that job :-)
One of the major goals of Less Wrong is to analyze our cognitive algorithms. When analyzing algorithms, it’s very important to consider corner cases. Torture is an example of extreme disutility, so it naturally comes up as a test case for moral algorithms.
I’ve heard that before, and I grant that there’s some validity to it, but that’s not all that’s going on here. 90% of the time, torture isn’t even relevant to the question the what-if is designed to answer.
The use of torture in these hypotheticals generally seems to have less to do with ANALYZING cognitive algorithms, and more to do with “getting tough” on cognitive algorithms: grinding an axe, or just wallowing in self-destructive paranoia.
If the point you’re making really only applies to torture, fine. But otherwise, it tends to read like “Maybe people will understand my point better if I CRANK MY RHETORIC UP TO 11 AND UNCOIL THE FIREHOSE AND HALHLTRRLGEBFBLE”
There are a number of things that make me not want to self-identify as a lesswrong user, or to bring up lesswrong with people who might otherwise be interested in it, and this is one of the big ones.
“Maybe people will understand my point better if I CRANK MY RHETORIC UP TO 11 AND UNCOIL THE FIREHOSE AND HALHLTRRLGEBFBLE”
Not necessarily even wrong. The higher the stakes, the more people will care about getting a winning outcome instead of being reasonable. It’s a legit way to cut through the crap to real instrumental rationality. Eliezer uses it in his TDT paper (page 51):
… imagine a Newcomb’s Problem in which a black hole is hurtling toward Earth, to wipe out you and everything you love. Box B is either empty or contains a black hole deflection device. Box A as ever transparently contains $1000. Are you tempted to do something irrational? Are you tempted to change algorithms so that you are no longer a causal decision agent, saying, perhaps, that though you treasure your rationality, you treasure Earth’s life more?
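For anyone who hasn’t seen the original problem, the “winning vs. being reasonable” tension the excerpt leans on can be made concrete with a quick expected-payoff calculation. Here is a minimal sketch in Python using the standard $1,000 / $1,000,000 payoffs and an assumed 99%-accurate predictor; the numbers and function name are illustrative, not taken from the TDT paper:

```python
# Evidential expected payoff for one-boxing vs. two-boxing in the standard
# Newcomb's Problem. "accuracy" is the assumed reliability of the predictor.

def expected_value(one_box: bool, accuracy: float = 0.99) -> float:
    small, big = 1_000, 1_000_000
    if one_box:
        # Predictor most likely foresaw one-boxing, so box B is most likely full.
        return accuracy * big
    # Predictor most likely foresaw two-boxing, so box B is most likely empty.
    return small + (1 - accuracy) * big

print(expected_value(one_box=True))   # one-boxing: ~$990,000 expected
print(expected_value(one_box=False))  # two-boxing: ~$11,000 expected
```

A causal decision agent still two-boxes, since the boxes are already filled or empty before the choice is made; the point of swapping the million dollars for a black hole deflector is that once the stakes are that high, hardly anyone is content to lose on principle.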
Creepily heavy reliance on torture-based what-if scenarios.
I haven’t read TOO much mainstream philosophy, but in what I have, I don’t recall even a single instance of torture being used to illustrate a point.
Maybe that’s what’s holding them back from being truly rational?
I agree. I wrote the article you’re citing. I was hoping that by mocking it properly it would go away.