Why does every other hypothetical situation on this site involve torture or horrible pain? What is wrong with you people?
Edit: I realize I’ve been unduly inflammatory about this. I’ll restrict myself in the future to offering non-torture alternative formulations of scenarios when appropriate.
We understand why edge cases and extremes are critical when testing a system—be that a program, a philosophy, a decision theory or even just a line of logic.
I wonder if that’s really what it is, just the writer attempting to make people take his position more seriously by crudely leveraging a sensationalized example.
I don’t think so. I think it’s more like the outward manifestation of some neurosis.
In this particular case, what is gained by inserting a thousand years of agony into the situation? How is this a critical test of anything besides the reader’s tolerance for tasteless hyperbole?
I suggest you think more carefully about whether you really want to endorse the standard of judging (and potentially dismissing) what people say based on hastily constructed theories about their personality flaws.
Anyway. Suppose you wanted to construct a hypothetical example that trades off, on the one hand, an immortal and basically positive lifespan, and on the other hand, X.
What X would you, thankfully neurosis-free and admirably aware of the importance of choosing good hypotheticals, choose that could plausibly be traded off for that?
I’m reminded of the old joke about a ham sandwich being preferable to eternal happiness.
I see your point, but I guess my problem is that I don’t see why constructing these tradeoffs is productive in the first place. It just seems like a party game where people ask what you’d do for a million dollars.
Like, in the situation here, with uploading, why does immortality even need to be part of the equation? All he’s really saying is “intuitively, it doesn’t seem like an upload would ‘really’ be me”. What happens to the upload, and what happens to the original, is just a carnival of distractions anyway. We can easily swap them around and see that they have no bearing on the issue.
Yeah, as I said earlier, if you can’t think of a better way to have the conversations but don’t think those conversations are worth having at all, I have nothing to say to that.
Like any conversation, they’re interesting to the people they interest, and not to the people they don’t… I don’t really understand why people talk so much about football, for example.
I fundamentally disagree with your position. I had previously thought your question was one of ironic jest, but now it seems like you have a genuine weakness when it comes to abstract thought.
“Outward manifestation of some neurosis”—now that challenges my tolerance for tasteless hyperbole. Personally insulting the entire community you are participating in, without provocation? That is a genuine indication of an unhealthy psychological trait. Most stable humans have a strong instinctive aversion to alienating themselves from communities in which they are ongoing participants.
In the last meetup I went to, there was an obnoxious guy who was dominating the conversation, and somehow got into a relativism-based defense of something, I think just to be contrary.
Several other people jumped on him at this point, and soon the argument swung around to “what about torture? what if you were being tortured?” and he came up with rationalizations about how what doesn’t kill you makes you stronger, it’d be a great story, etc. etc., and so they kept piling on qualifications, saying “they torture you for 50 years and then execute you and you have no hope of rescue and blah blah blah”, trying to nail down boards over every possible ray of sunshine.
And of course even then, rationalizations were found, and his girlfriend took up the contrarian standard and soliloquized about how she was a survivor of a suicide attempt and believed it was always better to choose life, no matter how painful, and the other participants responded by cranking up the torture even further.
Did anything remotely productive come out of this? No, of course not. It was ugly, it was pointless, and frankly it was embarrassing to be a part of. I left.
I question whether I’m the screwed up one for not swallowing my alienation to this kind of behavior.
Someone was an ass in a conversation you were part of. Evidently it affected you personally. But you have generalised that to an assumption of neurosis about a broad class of people who happen to discuss abstract decision problems with ‘torture’ plugged in as the extreme case. More to the point, you actively declare your smug judgement on the broad class ‘you people’. Apart from indicating a clearly defective model of human psychology, that is unnecessarily anti-social behaviour.
The appropriate response to having an unpleasant conversation with an ass is not to become obnoxious yourself.
It wasn’t just one person, it was three or four. And it wasn’t just that they INVOKED torture, it was that they clung to it like a life preserver, like it was the magic ingredient for winning the argument.
This is so far outside the bounds of civil discourse, and yet it’s routine in this community. I don’t think it’s unwarranted to be generally concerned.
Note also that, besides thought experiments, “extreme negative utility” shows up in religious discourse. I’d say Hell is probably the archetypal example of [someone proposing] infini-torture [to win an argument].
Off the top of my head, torture and similar very unpleasant things are useful for at least two purposes.
As in this post, you could attempt to quantify how much you value something (in this case, effective immortality) by how long you would be willing to exist in an extremely uncomfortable state (such as being tortured).
Similarly, if someone is attempting to make certain absolute statements regardless of circumstance (such as “I would never kill another human being”), such a conjecture can be used to quantify how much negative utility they attribute to committing that act.
If you feel severe discomfort in a conversation where someone is using torture as a hypothetical, I suppose you could either leave the conversation or ask them to use a different hypothetical. But the whole point of using torture in such a case is that it is extremely unpleasant, so the alternative, if chosen well, may be equally discomforting to you.
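To make that first purpose concrete, here is a minimal sketch of the indifference-point arithmetic (the symbols $c$ and $T^*$ are my own, not anything from the post): suppose being tortured carries a roughly constant disutility of $c$ per year, and $T^*$ is the longest stretch of torture you would accept in exchange for effective immortality. At the point where you are just barely indifferent,

$$U(\text{immortality}) \approx c \cdot T^*,$$

so the hypothetical is really just a crude instrument for eliciting $T^*$, which in turn gives a lower bound on how much you value the thing on offer.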
I agree that if clinging desperately to magic assertions for winning arguments were routine in this community, that would warrant concern.
I don’t agree that it is, in fact, routine here.
You are entirely justified in not swallowing your alienation to ugly, pointless, embarrassing, aggravating behavior like what you describe those folks engaging in.
Rejecting that doesn’t make you screwed up.
But the conversation you describe wouldn’t suddenly become less ugly, pointless, embarrassing, or aggravating if these people had instead been arguing the same way about, say, hypothetically losing billions of dollars, or hypothetically moving from Beverly Hills to Bangladesh, or hypothetically swimming the English Channel.
That is, I don’t think the event you’re describing justifies the conclusion you’re trying to use it to justify.
That said, I also don’t think you actually care about that.
I’ve often wondered that.
In some sense, it’s not actually true… lots of hypotheticals on this site involve entirely mundane situations. But it’s true that when we start creating hypotheticals with very large stakes, the torture implements come out.
I suspect it’s because we don’t know how to talk about the opposite direction, so the only way we know to discuss a huge relative disutility is to talk about pain. I mean, the thing that is to how-I-am-now as how-I-am-now is to a thousand years of pain is… well, what, exactly?
Why do people feel the need to discuss “huge relative disutilities”? What’s the difference between that and being obnoxiously hyperbolic?
In the current example, I’m not even sure what kind of point he’s trying to make. It sounds like he’s saying “Some people like bagels. But what if someone poisoned your bagel with a poison that made your blood turn into fire ants?”
Is this an autism thing? There were people doing this at the meetup I went to as well.
I don’t know if it’s an autism thing… but I’m definitely going to have to include that in a hypothetical one of these days. :)
YES. Something like:
“So Omega offers you a bagel, saying ‘Here is a bagel. If this is a thought experiment, it is filled with fire-ant poison.’ Do you eat it?!”
?
Absolutely! After all, eating fire-ant poison in a thought experiment is harmless, whereas in the real world I’d have a tasty bagel.
Well, what other kind of disutility would you suggest that could conceivably counterbalance the attractiveness of immortality?
It seems like moral problems get a negative phrasing more often than not in general, not just when Yudkowsky is writing them. I mean, you have the Trolley problem, the violinist, pretty much all of these; the list goes on. Have you ever looked at the morality subsections of any philosophy forums? Everything is about rape, torture, murder, etc. I just assumed that fear is a bigger motivator than potential pleasantness and is a common aspect of rhetoric in general. I think that at least on some level it’s just the name of the game: moral dilemma → reasoning over hard decisions in very negative situations, not because ethicists are autistic, but because that is the hard part of morality for most humans. When I overhear people arguing over moral issues, I hear them talking about whether torture is ever justified or whether murder is ever OK.
Arguing about whether the tradeoff of killing one fat man to save five people is justified is more meaningful to us as humans than debating whether, say, we should give children bigger lollipops if it means there can’t be as much raw material for puppy chow (ergo, we will end up with fewer puppies, since we are all responsible and need to feed our puppies plenty, but we want as many puppies as possible because puppies are cute, but so are happy children).
This isn’t to say that the way it’s currently done is the most rational way to carry on a moral dialogue, only that you seem to be committing a fundamental attribution error due to a lack of general exposure to moral dilemmas and the people arguing them.
Besides, it’s not like I’m thinking about torture all the time just because I’m considering moral dilemmas in the abstract. I think that most people can differentiate between an illustration meant to pose a certain sort of puzzle and reality. I don’t get depressed or anxious after reading Lesswrong; if anything, I’m happier, more excited and revitalized. So I’m just not picking up on the neurosis angle at all. Seems like it might be a mind projection fallacy?
Considering this style of thinking has led lesswrong to redact whole sets of posts out of (arguably quite delusional) cosmic horror, I think there’s plenty of neurosis to go around, and that it runs all the way to the top.
I can certainly believe not everybody here is part of it, but even then, it seems in poor taste. The moral problems you link to don’t strike me as philosophically illuminating; they just seem like something to talk about at a bad party.
I catch your drift about the post deletion, and I think there is a bit of neurosis in the secrecy and in sometimes keeping order in questionable ways, but that wasn’t what you brought up initially; you brought up the tendency to reason about moral dilemmas that are generally quite dark. I was merely pointing out that this seems to be the norm in moral thought experiments, not just the norm on lesswrong. I might concede your point if you provide at least a few convincing counterexamples; I just haven’t really seen any.
If anything, I worry more about the tendency to call deviations from lesswrong standards insane, as it seems to be more of an in-group/out-group bias than is usually admitted, though it might be improving.
Yeah, really what I find to be the ugliest thing about lesswrong by far is the sense of self-importance, which contributed to the post deletion quite a bit as well.
Maybe it’s the combination of these factors that’s the problem. When I read mainstream philosophical discourse about pushing a fat man in front of a trolley, it just seems like a goofy hypothetical example.
But lesswrong seems to believe that it carries the world on its shoulders, and when people here talk about deciding between torture and dust specks, or torture and alien invasion, or torture and more torture, I get the impression they are treating this, at least in part, as though they actually expect to have to make this kind of decision.
If all the situations you think about involve horrible things, regardless of the reason for it, you will find your intuitions gradually drifting into paranoia. There’s a certain logic to “hope for the best, prepare for the worst”, but I get the impression that for a lot of people, thinking about horrible things is simply instinctual and the reasons they give for it are rationalizations.
Do you think that maybe it could also be tied up with this sort of thing? Most of the ethical content of this site seems to be heavily related to the sort of approach Eliezer takes to FAI. This isn’t surprising.
Part of the mission of this site is to proselytize the idea that FAI is a dire issue that isn’t getting anywhere near enough attention. I tend to agree with that idea.
Existential risk aversion is really the backbone of this site. The flow of conversation is driven by it, and you see its influence everywhere. The point of being rational in the Lesswrongian sense is to avoid rationalizing away the problems we face each and every day, to escape the human tendency to avoid difficult problems until we are forced to face them.
In any event, my main interest in this site is inextricably tied in with existential risk aversion. I want to work on AGI, but I’m now convinced that FAI is a necessity. Even if you disagree with that, it is still the case that there are going to be many ethical dilemmas coming down the pipe as we gain more and more power to change our environment and ourselves through technology. There are many more ways to screw up than there are to get it right.
This is all there is to it: someone is going to be making some very hard decisions in the relatively near future, and there are going to be some serious roadblocks to progress if we do not equip people with the tools they need to sort out new, bizarre and disorienting ethical dilemmas. I believe this is likely to be the case. We have extreme anti-aging, nanotech and AGI to look forward to, to name only a few. The ethical issues that come hand in hand with these sorts of technologies are immense and difficult to sort out. Very few people take these issues seriously; even fewer are trying to actually tackle them, and those who are don’t seem to be doing a good enough job. It is my understanding that changing this state of affairs is a big motive behind lesswrong. Maybe lesswrong isn’t all that it should be, but it’s a valiant attempt, in my estimation.
I resent the suggestion that I instinctively think of 3^^^3 dust specks! I have to twist my cortex in all sorts of heritage-violating imaginative ways to come up with the horrible things I like to propose in goofy hypotheticals! I further assert that executing the kind of playfully ridiculous-but-literal conversation patterns that involve bizarre horrible things did not help my ancestors get laid.
FWIW, I’m neurotypical and not exceptionally obnoxious.
Can’t speak for “people.” I can speak a little bit to why I do it, when I do.
One difficulty with distinguishing consequentialist from deontological ethical frameworks is that in many plausible scenarios, they make the same predictions. I can talk about why it’s a bad idea to rob a bank in terms of its consequences, but a deontologist will just shrug and say “Or, you can just acknowledge that it’s wrong to rob banks, which is simpler,” and it’s not clear we’ve accomplished anything.
So to disambiguate them, it’s helpful to introduce cases where optimizing consequences requires violating deontological rules. And to turn up the contrast, it’s often helpful to (a) choose really significant deontological rules, rather than peripheral ones, and (b) introduce very large differences between the value of the +rules and -rules conditions.
Which leads to large relative disutilities.
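To make that disambiguation move concrete, here is a toy sketch (entirely my own construction, with invented scenario names and values, not anything from the discussion): two decision rules that agree on a mundane case and only come apart once a very large payoff is attached to the rule violation.

```python
# Toy illustration: two decision rules that agree on mundane cases
# and diverge only when a huge payoff is attached to a rule violation.

def consequentialist_score(action):
    """Rank actions purely by outcome value."""
    return action["outcome_value"]

def deontologist_score(action):
    """Rule violations are off the table, whatever the payoff."""
    return float("-inf") if action["violates_rule"] else action["outcome_value"]

def choose(actions, score):
    """Pick the highest-scoring action under a given rule."""
    return max(actions, key=score)["name"]

# Mundane case: both frameworks reject robbing the bank.
mundane = [
    {"name": "rob bank", "violates_rule": True, "outcome_value": -10},
    {"name": "don't rob bank", "violates_rule": False, "outcome_value": 0},
]

# Extreme case: the rule violation now averts an enormous disutility,
# which is exactly the move that forces the two frameworks apart.
extreme = [
    {"name": "violate rule, avert disaster", "violates_rule": True, "outcome_value": 10**9},
    {"name": "keep rule, allow disaster", "violates_rule": False, "outcome_value": 0},
]

for label, case in (("mundane", mundane), ("extreme", extreme)):
    print(label, "->", choose(case, consequentialist_score),
          "|", choose(case, deontologist_score))
# mundane -> don't rob bank | don't rob bank                          (they agree)
# extreme -> violate rule, avert disaster | keep rule, allow disaster (they diverge)
```

The oversized outcome_value in the extreme case is just point (b) above: without a large gap between the +rules and -rules conditions, the two columns never differ and the comparison tells you nothing.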
Now, one can certainly say “But why is distinguishing consequentialist from deontological ethical frameworks so important that you’re willing to think about such awful things in order to do it? Can’t you come up with nicer examples? Or, if not, think about something else altogether?”
To which I won’t have a response.
As for the current example, I’m not exactly sure what point he’s making either, but see my comment on the post for my best guess, and my reaction to it.
I think part of what bothers me about these things is that I get the impression the readers of lesswrong are PICKING UP these neuroses from each other, learning by example that this is how you go about things.
Need to clarify an ethical question, or get an intuitive read on some esoteric decision theory thing, or just make a point? Add torture! If Yudkowsky does it, it must be a rational and healthy way to think, right?
Interesting. Yeah, I can see where that impression comes from, though I’m not sure it’s accurate.
If you notice me using hypothetical suffering in examples where you can come up with an alternate example that expresses the same things except for the suffering, feel free to call me on it, either publicly or privately.
I cringed when I read about that “1000 years of terrible agony”. Just thinking about that is bad enough.