Sure, how about being in a village taken over by the Khmer Rouge, or a concentration camp in Nazi Germany? Someplace where you don’t necessarily die quickly, but have to endure a long and very unpleasant time with some amount of psychological or physical pain.
I get the feeling I may just not be completing a full application of the definition here, but how does this apply to serious, terrible, “let’s just imagine they threw you in hell for a few days” suffering? Sure, one could say that what’s being imagined is mostly pain, and that the massive overload of a sensor not designed for such environments is really what bothers us, but is there a way that the part of this we usually talk about as ‘suffering’ fits into the attention-allocation narrative? Or are we talking about two different things here?
All the same, I find this fascinating and am going to experiment with it in my daily life. Looking forward to the non-content-focused post.
I agree; I can’t reliably predict my own actions. I think I know the morally correct thing to do, but I’m skeptical of my (or anyone’s) ability to make reliable predictions about their actions under extreme stress. As I said, I usually use this on people who seem overly confident in the consistency of their morality and their ability to follow it, as well as on people who question the plausibility of the original problem.
But I do recall the response distribution for this question mirroring the distribution for the second trolley problem: far fewer people take the purely consequentialist view than when they merely have to flip a switch, independent of their ability to act morally. I still don’t find it incredibly illuminating, since all it shows is that our moral intuitions are fundamentally fuzzy, or at least that we value things other than just how many people live or die.
Great point. I’ve never thought of that, and no one I’ve ever tried this one on has mentioned it either. This makes it more interesting to me that some people still wouldn’t kill the baby, though that may be for reasons other than real moral calculation.
I hope I’d do the same. I’ve never had to kill anyone before though, much less my own baby, so I can’t be totally sure I’d be capable of it.
I’ve used the trolley problem a lot, at first to show off my knowledge of moral philosophy, but later, when I realized anyone who knows any philosophy has already heard it, to shock friends who think they have a perfect and internally consistent moral system worked out. But I add a twist, which I stole from an episode of Radiolab (which got it from the last episode of MASH), that I think makes it a lot more effective: say you’re the mother of a baby in a village in Vietnam, and you’re hiding with the rest of the village from the Viet Cong. Your baby starts to cry, and you know that if it keeps crying they’ll find you and kill the whole village. But you could smother the baby (your baby!) and save everyone else. The size of the village can be adjusted up or down to hammer in the point. Crucially, I lie at first and say this is an actual historical event that really happened.
I usually save this one for people who smugly answer both trolley questions with “they’re the same, of course I’d kill one to save 5 in each case”, but it’s also remarkably effective at dispelling objections of implausibility and rejections of the experiment. I’m not sure why it works so well, but I think our bias toward narratives we can place ourselves in helps. Almost everyone at this point says they think they should kill the baby, but that they just couldn’t, to which I respond, “Doesn’t the world make more sense when you realize you value thousands of complex things in a fuzzy and inconsistent manner?” Unfortunately, I have yet to make friends with any true psychopaths. I’d be interested to hear their responses.
Despite all of the concerns raised in lionhearted’s post, and everything that’s been written on LW about how analytic types have trouble getting along without getting defensive and prickly, I still think I wouldn’t see a response like this just about anywhere else on the internet. Karma points to you.
I like this post a lot; it speaks to some of my concerns about this community and about the sorts of people I’d like to surround myself with. As an analytical/systematizing/whatever type (I got a 35 on the test Roko posted a while back; interpret from that what you will), I felt very strange about all of these rhetorical games for most of my life. It was only when I discovered signalling theory in my study of economics that they started to make sense. If I frame my social interactions as signalling problems, the goals, and the ways I should achieve them, become a lot clearer. It’s also a useful outsider perspective; I find that I can (occasionally) recognize what people are really trying to do, and how, better than they can, simply because they have a richer and more complicated view of human socializing.
While I agree with almost everything you’ve posted in the abstract, assuming your whole goal really is to have a reasoned discussion that provides evidence on which to update your beliefs, I think we often misunderstand our own intentions. Although the most common counterargument to “you should be nicer in how you frame your responses” is “I don’t have time for all that fluff”, there is a whole lot more going on.

First there are the personal signalling goals: not just to demonstrate that you are clever, but also that you are the sort of person who is upfront and honest (something people value) and not overly concerned with status in the eyes of your peers. This second one is amusing, since the purpose of the signal is to raise your status with people who value not valuing status too much. Children acting “too cool” to try hard in school are a good example of this, among other things.

Additionally, there is the enforcement of group norms, not all of which have to be in perfect harmony. For example, LW has a group norm of open discussion with an eye toward gathering information about and improving human rationality. However, we also have a norm of not suffering fools gladly (what would happen if some nut posted stuff about The Secret as a comment on the quantum physics sequence?). Often these two sync up, since we don’t want the discussion polluted by noisy nonsense, but sometimes they don’t; I’ve seen more than a few people with valuable ideas leave this community because of hostile treatment.
None of this is to say that people shouldn’t be nicer if they want to achieve their higher goals. That’s a principle I’ve tried to operate on, and I think it’s a good one. My point is that, just like the player who seems to play the ultimatum game irrationally by rejecting unfair offers, people who respond acrimoniously to their peers are often playing a game with a different goal, whether they know it or not. It may still be irrational, but it’s irrational in a complex and multifaceted way, and I doubt one post, or even a whole sprawling sequence of posts, would be enough to touch on all the complicated signals involved. The fact that the norm-enforcement algorithm feels a lot more individualistic and noble from the inside than it looks from the outside makes the whole problem a lot worse.
You said pretty much what I was thinking. My (main) motivation for copying myself would be to make sure there is still a version of the matter/energy pattern wstrinz instantiated in the world in the event that one of us gets run over by a bus. If the copy has to stay completely separate from me, I don’t really care about it (and I imagine it doesn’t really care about me).
As with many uploading/anthropics problems, I find abusing Many Worlds to be a good way to get at this. Does it make me especially happy that there’s a huge number of other me’s in other universes? Not really. Would I give you anything, time or money, if you could credibly claim to be able to produce another universe with another me in it? Probably not.
Man, I wish I weren’t away at college; I’d love to come to a Less Wrong meetup in my hometown...
Once I understood the theory, my first question was: has this been explained to any delusional patient with a good grasp of probability theory? I know this sort of thing generally doesn’t work, but the n=1 experiment you mention is intriguing. What’s more often interesting to me is what sorts of things people come up with to dismiss conflicting evidence, since these sit in a strange place between completely random confabulation and clever lie. If you have a “dragon in your garage” about something, you tend to give the most plausible excuses, because deep down you know the truth about the phenomenon and can construct your explanation around that recognition of the way the world actually is. Delusional patients, by contrast, say things like “this is my daughter’s arm” that just don’t make any sense, and indicate to us in an eerie way just how deeply they believe their delusions. I’m surprised that, given the contributions the study of injured brains has made to neurobiology, there isn’t a bigger focus on the study of abnormal mental systems in cognitive science and decision theory (not that I’m the first person to wonder this or anything).