The “people are extraordinarily more altruistic-motivated than they actually are” bias is so pernicious and widespread I’ve never actually seen it articulated in detail or argued for.
I haven’t seen it articulated, or even mentioned. What is it? It sounds like this is just the common amnesia (or denial) of the rampant hypocrisy in most humans, but I’ve not heard that phrasing.
would it be fair to replace the first “are” (and maybe the second) with something that doesn’t imply essentialism or identity? “people are assumed to be” or “people claim to be” followed by “more altruistic than their behavior exhibits”?
The most salient example of the bias I can think of comes from reading interviews and books about the people who worked in the extermination camps in the Holocaust. In my personal opinion, all the evidence points to them being literally normal people, representative of the average police officer or civil service member pre-1931. Holocaust historians nevertheless typically try very hard to outline some way in which Franz Stangl and crew were specially selected for lack of empathy, instead of raising the more obvious hypothesis that the median person is just not that upset by murdering strangers in a mildly indirect way, because the wonderful-humans bias demands a different conclusion.
This goes double in general for the entire public conception of killing as the most evil-feeling thing that humans can do, contrasted with actual memoirs of soldiers and the like who typically state that they were surprised how little they cared compared to the time they lied to their grandmother or whatever.
I may have the same bias, and may in fact believe it’s not a bias. People are highly mutable and contextual in how they perceive others, especially strangers, especially when they’re framed as outgroup.
The fact that a LOT of people could be killers and torturers in the right (or very wrong) circumstances doesn’t seem surprising to me, and it doesn’t contradict my belief that many, perhaps most, genuinely care about others given better framing and circumstances.
There is certainly a selection effect, likewise for modern crime-related work: people with the ability to frame “otherness”, and with some drive for individual power, tend to be drawn to it. There are certainly lots of Germans who did not participate in those crimes, and lots of current humans who prefer to ignore the question of what violence is used against various subgroups*.
But there’s also a large dollop of “humans aren’t automatically ANYTHING”. They’re far more complex and reactive than a simple view can encompass.
* OH! That’s a bias that’s insanely common. I said “violence against subgroups” rather than “violence by individuals against individuals, motivated by membership in and identification with different subgroups”.
Yeah, I echo this.
I’ve gone back and forth with myself about this sort of stuff. Are humans altruistic? Good? Evil?
On the one hand, yes, I think lc is right that in some situations people exhibit an extraordinary lack of altruism and sympathy. But on the other hand, there are other situations where people do the opposite: they’ll, I dunno, jump into a lake at risk to their own lives to save a drowning stranger. Or risk their lives running into a burning building to save strangers (lots of volunteers did this during 9/11).
I think the explanation is what Dagon is saying about how mutable and context-dependent people are. In some situations people will act extremely altruistically. In others they’ll act extremely selfishly.
The way that I like to think about this is in terms of “moral weight”. How many utilons to John Doe would it take for you to give up one utilon of your own? Like, would you trade 1 utilon of your own so that John Doe can get 100,000 utilons? 1,000? 100? 10? Answering these questions, you can come up with “moral weights” to assign to different types of people. But I think that people don’t really assign a moral weight and then act consistently. In some situations they’ll act as if their answer to my previous question is 100,000, and in other situations they’ll act like it’s 0.00001.
My model of utility (and the standard one, as far as I can tell) doesn’t work that way. No rational agent ever gives up a utilon—that is the thing they are maximizing. I think of it as “how many utilons do you get from thinking about John Doe’s increased satisfaction (not utilons, as you have no access to his, though you could say “inferred utilons”) compared to the direct utilons you would otherwise get”.
Those moral weights are “just” terms in your utility function.
And, since humans aren’t actually rational, and don’t have consistent utility functions, actions that imply moral weights are highly variable and contextual.
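The framing above, where moral weights are “just” terms in your utility function, can be sketched in a few lines of Python. Everything here is illustrative: the function name, the weight value, and the `john_doe` key are assumptions for the sketch, not anything stated in the thread.

```python
# Sketch of "moral weights as terms in a utility function".
# The agent's total utility is its direct payoff plus a weighted term
# for each other person's inferred satisfaction.

def total_utility(own_payoff, others, weights):
    """others maps a person's name to their inferred satisfaction;
    weights maps the same name to the 'moral weight' placed on them."""
    return own_payoff + sum(weights[name] * inferred
                            for name, inferred in others.items())

# Hypothetical weight: the trade "give up 1 direct utilon so John Doe
# gains 100,000" is worthwhile exactly when weight * 100_000 > 1,
# i.e. when the weight exceeds 1e-5.
weights = {"john_doe": 1e-4}

keep = total_utility(1.0, {"john_doe": 0.0}, weights)         # decline the trade
trade = total_utility(0.0, {"john_doe": 100_000.0}, weights)  # accept the trade
print(trade > keep)  # with this weight, the trade is preferred
```

On this sketch, the thread’s observation that people act as if the weight is 100,000 in one context and 0.00001 in another just means the weight term is not a stable constant.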
Ah yeah, that makes sense. I guess utility isn’t really the right term to use here.
Recommendations for such memoirs?
Not really a memoir, but a German documentary about WWII might be of interest to you: Der unbekannte Soldat.
I watched it on Amazon Prime, and you can still find the title there in a search; I’m not sure if it is only available for rent/sale now or if you can stream it with a Prime membership.