I wonder how much of that is a “don’t speak ill of the dead” reflex, or “nobody could have seen it, so it’s not my fault I didn’t”, or even just “I’m such a good & loving friend/relative I didn’t see anything wrong with him”.
I’m sure there are cases that really came out of the blue, but I also have a nagging feeling that if you could interview the same people before the something horrible, and do it from an insider point of view (i.e., a question asked by another friend of the interviewee rather than by a reporter), a lot of answers would be of the “he’s kind of a weirdo” type.
Now update on the number of people who get called “a weirdo” but do not end up murdering anyone. And add the negative halo effect and the fundamental attribution error that come from knowing in hindsight that the person you’re talking about has recently murdered someone.
As I said, I don’t have any real evidence, and I believe it’d be very hard to collect. That said:
Now update on the number of people who get called “a weirdo” but do not end up murdering anyone.
I’m not quite sure I understand what you mean by this. Let H=(did something horrible), S=(really suspicious), W=(just a bit creepy, weird, etc.).
I suspect that P(H & S) > P(H & W & !S) > P(H & !W & !S), and that 1 > P(H|S) >> P(H|W & !S) >> P(H|!W & !S). All these conditional probabilities are low, but I’m not sure what you mean to say by that.
And add the negative halo effect and the fundamental attribution error that come from knowing in hindsight that the person you’re talking about has recently murdered someone.
I’m pretty sure such effects are not linearly additive. Especially when there’s a conflict (friend/non-hated-family, did something bad), I don’t think you can determine the result just by logic, you have to see what people actually do.
Notice how media narratives tend to become either “I always knew he was up to no good” or “I’d never have thought he would do something like that”, but you almost never hear something in the middle. I’m even having trouble finding a concise wording for a middle case other than “meh”.
I’m sure the media has a lot to do with that, showing just the witnesses with the most “interesting” story, but I’m almost sure people also do this more-or-less automatically in their heads.
I’m not quite sure I understand what you mean by this.
You said,
if you could interview the same people before the something horrible [...] a lot of answers would be of the “he’s kind of a weirdo” type.
What I meant was: you should also consider the number of cases where people said the same “he’s kind of a weirdo”, but that person did not go on to do something horrible. And also the number of cases where people did not say it, and yet the person did something horrible. All three are necessary for calculating the strength of the evidence “people say he’s kind of a weirdo” in favor of the hypothesis “he will do something horrible”.
There’s a common fallacy, which you may not have committed, but which your comment as written seemed to me to evoke. Logically it’s equivalent to base rate neglect. In conversation, it’s often triggered like this: a nontrivial value for P(A|B) is given, but the probability P(A|~B) is not mentioned. The listener doesn’t have a good estimate of P(A) or P(B), and he doesn’t think to ask; instead the high value of P(A|B) makes him think A is a good predictor of B, which is the fallacy. (Here, A = be called a weirdo, B = commit a horrible deed.)
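To make the base-rate point concrete, here’s a quick sketch with entirely made-up numbers (the 1-in-100,000 base rate and the 70%/10% figures are pure assumptions for illustration, not data):

```python
# Base rate neglect, illustrated with invented numbers.
# B = "commits a horrible deed", A = "is called a weirdo by acquaintances".
p_b = 1e-5             # assumed base rate of B in the population
p_a_given_b = 0.7      # assumed: 70% of perpetrators were called weirdos beforehand
p_a_given_not_b = 0.1  # assumed: 10% of everyone else gets called a weirdo too

# Bayes' rule: P(B|A) = P(A|B)P(B) / [P(A|B)P(B) + P(A|~B)P(~B)]
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
p_b_given_a = p_a_given_b * p_b / p_a

print(f"P(B)   = {p_b:.6f}")     # prior
print(f"P(B|A) = {p_b_given_a:.6f}")  # posterior, still tiny
print(f"odds multiplier from the label: ~{p_b_given_a / p_b:.1f}x")
```

With these numbers the “weirdo” label multiplies the probability by about seven, yet the posterior is still on the order of 1 in 14,000: noticeable evidence, but nowhere near a useful predictor, which is exactly why P(A|~B) and the base rate have to be asked about.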
I’m pretty sure such effects are not linearly additive.
I’m not saying they’re linear or otherwise well-behaved, but I’m pretty sure they combine monotonically, in the sense that in any given combination, if you increase any of the factors, the total also increases.
Notice how media narratives tend to become either “I always knew he was up to no good” or “I’d never have thought he would do something like that”, but you almost never hear something in the middle. I’m even having trouble finding a concise wording for a middle case other than “meh”.
It may also be that “I’d never have thought he’d do that” is the middle, the default way in which people think of anyone they have no reason to specially suspect. After all, I don’t expect a random stranger on the street to suddenly commit a horrible deed; why should I expect it more of an acquaintance unless there are concrete warning signs, which would make me say “I always knew he was up to no good”.
The true opposite of “I always knew...” would on this view be like Harry’s reaction to Hermione confessing attempted murder: “I don’t believe she did it, it’s a priori so improbable there must be another explanation or very special circumstances”. However, when the media has concluded someone has committed a horrible deed and is morally culpable, of course you won’t hear many people saying this to the media, even if they think so privately.
you should also consider the number of cases where people said the same “he’s kind of a weirdo”, but that person did not go on to do something horrible. And also the number of cases where people did not say it, and yet the person did something horrible. All three are necessary for calculating the strength of the evidence “people say he’s kind of a weirdo” in favor of the hypothesis “he will do something horrible”
Well, yeah, I agree, but I wasn’t trying to do that. At least I don’t think I was, and if it’s an implied assumption in what I said I don’t see it.
My original comment just said that I suspect many of the “I had no suspicion” after-crime statements are false (consciously or not), and was based mostly on how I suspect people’s brains might react, not on the rates of horrible acts.
My second comment I think said the same thing your quote above does, except adding that I also suspect a certain ordering of rates. But as I said in my first comment, I don’t have the actual rates and I believe they’re hard to obtain, so it’s just a suspicion.
After all, I don’t expect a random stranger on the street to suddenly commit a horrible deed; why should I expect it more of an acquaintance unless there are concrete warning signs, which would make me say “I always knew he was up to no good”.
The true opposite of “I always knew...” would on this view be like Harry’s reaction to Hermione confessing attempted murder: “I don’t believe she did it, it’s a priori so improbable there must be another explanation or very special circumstances”.
That’s true. I guess there are just very few people with this kind of reasoning (à la the Wizengamot); once they heard it happened, most probably take it for granted it was so, and they have only the “knew it all the time”/“didn’t see it coming” alternatives.
After all, I don’t expect a random stranger on the street to suddenly commit a horrible deed; why should I expect it more of an acquaintance unless there are concrete warning signs, which would make me say “I always knew he was up to no good”.
For a random stranger you have only the base rate to go on, you’ve got no other evidence. (Though for specific strangers you might have stuff like “he looks like a mobster” or “I’m in a dangerous neighborhood”, or maybe “he’s black and wears a hoodie”, which are a bit different as signs go.)
My claim is not quite that “weird people murder more often”. Instead, I suspect that of the people who murder, a large majority were not stable/calm beforehand and did give signs, and that many if not most of the claims that there were no signs come from people forgetting or ignoring those signs.
(The two claims come apart if it so happens that very few people give no signs: so few that most murderers would still have given signs even if the fraction of no-sign people who do horrible things were at least as high as the fraction among those who did give signs. Which I believe unlikely but not quite impossible.)
In some cases, people who commit major violence have a history of minor violence.
However, another possibility is that even people who commit major violence have people they like and/or want to please, and behave better in some contexts than in others.