If it’s billions of people including a friend of mine, I suspect that my friend is worth about as much as they are in the 7-billion-person world, plus (billions − 1) people whom I’m apathetic about. I suspect I either get really confused at this point, or compartmentalize fiercely.
Thinking about this has caused me to realise that I already compartmentalise pretty fiercely. Some of the lines along which I compartmentalise are a little surprising when I investigate them closely… friend/non-friend is not the sharpest line of the lot.
One pretty sharp line is probably-trying-to-manipulate-me/probably-not-trying-to-manipulate-me. But I wouldn’t want to kill anyone on either side of that line (I wouldn’t even want to be rude to them without reason (though ‘he’s a telemarketer’ is reason for hanging up the phone on someone mid-sentence)). My brain seems to insist on lumping “have never met or interacted with, likely will never meet or interact with” in more-or-less the same category as “fictional”.
My brain seems to divide people into “playing characters” and “non-playing characters”, and telemarketers fall into the latter category. (The fact that my native language has a T-V distinction doesn’t help, though the distinction isn’t exactly the same.)
That sounds more like some sort of scope insensitivity than a revealed preference.
I don’t think it’s scope insensitivity in this particular case, because I’m considering one-on-one interactions in this compartmentalisation.
Of course, this particular case did come to my mind as a side-effect of a discussion on scope insensitivity.
Sorry, I was replying to the last bit. Edited.
Who the hell downvotes a clarification? Upvoted back to 0.
That edit does make your meaning clearer. It does so by highlighting that my phrasing was sloppy, so let me try to explain myself better.
Let us say that I hear of someone being mugged. My emotional reaction changes as a function of my relationship to the victim. If the victim is a friend, I am concerned and rush to check that he is OK. If the victim is an acquaintance, I am concerned and check that he is OK the next time I see him. If the victim is someone whom I have never met or interacted with, and am unlikely to meet or interact with, I am mildly perturbed. If the victim is a fictional character, I am also mildly perturbed.
When considering only one person, those last two categories blur together in my mind somewhat.
If the victim is someone whom I have never met or interacted with, and am unlikely to meet or interact with, I shrug and think ‘so what? so many people get mugged every day, why should I worry about this one in particular?’ If it’s a fictional character, it depends on whether the author is good enough to switch me from far-mode to near-mode thinking.
Well, but this conflates differences in the object with differences in the framing. I certainly agree that an author can change how I feel about a fictional character, but an author can also change how I feel about a real person whom I have never met or interacted with, and am unlikely to meet or interact with.
Am I the only person here who is in any way moved by accounts of specific victims? Nonfiction writers can switch you to near-mode too, or at least they can for me.
If the account is detailed enough, it does move me, but not much more than an otherwise identical account that I know is fictional.
Phew! I was getting worried there.
OK, so you care about detailed accounts. Doesn’t that suggest that if you, y’know, knew more details about all those people being mugged, you would care more? So it’s just ignorance that leads you to discount their suffering?
Fictional accounts … well, people have never been great at distinguishing between imagination and reality, which, if you think about it, is actually really useful.
No, I mean that more details will switch my System 1 into near mode. My System 2 thinks that’s a bug, not a feature.
Really? My System 2 thinks System 2 is annoyingly incapable of seeing details, and System 1 is annoyingly incapable of seeing the big picture, and wants to use System 1 as a sort of zoom function to approximate something less broken.
I guess I’m unusual in this regard?
Like army1987, I can be moved by accounts of specific victims, whether they are fictional or not. There is a bug here, and the bug is this: I am moved the same amount by an otherwise identical fictional or nonfictional account, where the nonfictional account contains no-one with whom I have ever interacted.
That is, simply knowing that an account is non-fictional doesn’t affect my emotional reaction, one way or another. (This doesn’t mean I am entirely without sympathy for people I have never met—it simply means that I have equivalent sympathy for fictional characters). This is a bug; ideally, my emotional reaction should take into account such an important detail as whether or not something really happened. After all, what detail could be more important?
It’s not a bug, it’s a feature (in some contexts).
Suppose you’re playing two games of online chess against an anonymous opponent. You barely lose the first one. Now you’re feeling the spirit of competition, your blood boiling for revenge! Should you force yourself to relinquish the thrill of the contest, because “it doesn’t really matter”? That would be no fun! :-(
If you’re reading a work of fiction, knowing it is fiction, why are you doing so? Because emotional investment is fun? Why would you then sabotage your enjoyment by trying to downsize your emotional investment, since “it’s not real”? Also no fun! :-(
If the flawed heuristic you are employing in a certain context works in your favor in that context, switching it off would be dumb (although being vaguely aware of it would not be).
Oh, it does matter. There’s a real opponent there. That’s reality.
You make a good point.
I’m not sure I’d characterize that as a “bug”, more a feature we need to be aware of and take into account.
If you weren’t moved by fictional scenarios, you wouldn’t be able to empathize with people in those scenarios, including your future self! We mostly predict other people’s actions by using our own brain as a black box, imagining ourselves in their situation and how we would react, so there goes any situation featuring other humans. And we couldn’t daydream or enjoy fiction, either.
Would it be useful to turn it off? Maaaybe, but as long as you don’t start taking hypothetical people’s wishes into account, and you stop reading stuff that triggers you, you’re fine. I bet the costs of misuse would be higher than the marginal benefits.
I don’t think that empathising with fictional characters should be turned off. I just think that properly calibrated emotions should take all factors into account, with appropriate weightings. I notice that my emotions do not seem to be taking the ‘reality’ factor into account, and I therefore conclude that my emotions are poorly calibrated.
My future self would be a potentially real scenario, and thus would deserve all the emotional investment appropriate for a situation that may well come to pass. (He also gets the emotional investment for being me, which is quite large).
I’m not sure whether I should be feeling more sympathy for strangers, or less sympathy for fictional people.
So … are you saying that they’re poorly calibrated, but that’s fine and nothing to worry about as long as we don’t forget it and start giving imaginary people moral weight? Because if so, I agree with you on this.
More or less. I’m also saying that it might be nice if they were better calibrated. It’s not urgent or particularly important, it’s just something about myself that I noticed at the start of this discussion that I hadn’t noticed before.
Fair enough. Tapping out, since this seems to have resolved itself.
Fair enough.
That depends on how much you know about/empathize with them, right?
Yes; but I can know as much about a fictional character as about a non-fictional character whom I have not interacted with. The dependency has nothing to do with the fictionality or lack thereof of the character.
Right, hence my quoting both the section on fictional characters and the section on non-fictional ones.
To be honest, our brains don’t really seem to distinguish between fiction and non-fiction at all; it’s merely a question of context. Hence our reactions to fictional evidence and so forth. Lotta awkward biases you can catch from that, what with our tendency to “buy in” to compelling narratives.
It’s not a bias if you value an additional dollar less once all your needs are met.
It’s not a bias if you value a random human life less if there are billions of others, compared to if there are only a few others.
You may choose for yourself to value a $10 bill the same whether you’re dirt poor or a millionaire. Same with human lives. But you don’t get to yell “that’s a bias!” at others who have a more nuanced and context-sensitive estimation.
Except that humans actually do have a known bias called scope insensitivity, and it behaves differently from any claimed bounded utility function we might have.
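A minimal sketch of the distinction being drawn here, with made-up function shapes and numbers: a bounded utility function with diminishing marginal value still assigns strictly more value as more lives are at stake, whereas a scope-insensitive response barely tracks the number at all.

```python
import math

def bounded_utility(n_lives, scale=1_000_000):
    # Concave and bounded above by 1, but strictly increasing:
    # each extra life adds less value, yet more lives are always worth more.
    return 1 - math.exp(-n_lives / scale)

def scope_insensitive_response(n_lives):
    # Caricature of the bias: the felt importance tracks one vivid
    # prototype victim, so it stays roughly constant no matter how many
    # people are affected.
    return 1.0

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} lives  bounded={bounded_utility(n):.4f}  "
          f"insensitive={scope_insensitive_response(n):.1f}")
```

Across that range the bounded valuation still rises by several orders of magnitude, while the caricatured scope-insensitive response does not move at all, which is the sense in which the two behave differently.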