The difference is that if the teacher were aware of what he was doing, he wouldn’t do it, and if the child weren’t aware of what she was doing, she would behave the same way. If the teacher had a little neurofeedback button that somehow lit up when he was upset, rationalizing, or being impulsive, or that otherwise cued him when he was not thinking the way he would like to think, his behavior would change (somewhat).
It’s the difference between saying that someone with autism doesn’t care about people vs. saying that someone with autism cannot understand how other people are feeling, or saying someone with ADHD is lazy vs. saying they desperately want to work but can’t control their attention, or saying someone with face-blindness just doesn’t care about faces. (That’s what someone without those disorders would think first, since the behavior the disordered person exhibits matches what they themselves would do if they didn’t care.)
motivation and intention do not have to be the same at all
I agree, they don’t have to be the same. I’m making the case that small instances of real difference, coupled with poor modeling of other people, enhance and exaggerate the perception that they are not the same, and that for smart people this is particularly bad because everyone around them is just kinda globally worse off on every dimension... and because of the typical-mind fallacy the smart person will then assume everyone’s just kinda alien and terrible in their intentions rather than just slightly worse at carrying intentions out.
When I’m dealing with someone I know well who is “normal” and I see behavior 6 happening in a situation where I would have done 2+3= behavior 5, I model the other person as accidentally doing 2x3=6: same inputs, a slipped operation. Under the typical-mind fallacy, I would instead assume they had a similar mind (2+3), expect behavior 5, see behavior 6, and conclude “those people just don’t care about equal signs, I am so very alone.” That’s the trap to avoid.
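If it helps, here’s the same metaphor as a tiny toy sketch (purely illustrative; none of these names or numbers come from anyone’s actual model of anyone):

```python
# Toy sketch of the two explanations for seeing behavior 6 where I expected 5.
# "Intentions" are the inputs; the operation is how they get carried out.

my_intentions = (2, 3)

def my_execution(a, b):
    """How I turn intentions into behavior: 2 + 3 = behavior 5."""
    return a + b

observed_behavior = 6  # what the other person actually did

def slipped_execution(a, b):
    """Charitable model: same intentions, a slipped operation (2 x 3 = 6)."""
    return a * b

# Typical-mind trap: assume they ran *my* execution, so behavior 6 can
# only mean different (worse) intentions: "they don't care about equal signs."

assert my_execution(*my_intentions) == 5
assert slipped_execution(*my_intentions) == observed_behavior
```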
The difference is that if the teacher were aware of what he was doing, he wouldn’t do it.
Eh, no, I don’t think so. I’m not buying into the “if only people were more self-aware, they would be a lot nicer” theory. Especially with “it’s not his fault, he just doesn’t know any better” overtones.
because of the typical-mind fallacy the smart person will then assume everyone’s just kinda alien and terrible in their intentions rather than just slightly worse at carrying intentions out.
No, I still don’t think so. A smart person should be able to figure out Hanlon’s Razor. I don’t know any smart kids who actually had the “all of them are as smart as me, just much more mean” attitude towards others.
I model the other person as accidentally doing 2x3=6.
That’s a weird model. If it’s “accidental”, do you then predict that the next time it will be 4, or 7, or 11, or something random?
My usual starting model for other people is “What are their incentives? What are they trying to do to the best of their ability?” and only in the fairly rare cases of a major mismatch do I start to consider the possibility that these people might be really clueless or really mean or something like that.
I would predict they’ll repeat whatever failure modes they’ve shown in the past, or make the mistakes which I barely catch myself from making.
Are you sure that you don’t first look at the behavior and then calculate an incentive map? (Which obviously will fit rather well, since it is post hoc.) ((Because that’s the failure mode most people fall into.)) (((And doesn’t your last paragraph depict a thought process which is the exact opposite of Hanlon’s Razor?)))
Are you sure that you don’t first look at the behavior and then calculate an incentive map?
Well, both. Normally I estimate (and update) the model(s) in the middle of an interaction. Beforehand I have no data and have to fall back on priors, and afterward I have no need for a model.
Are you saying there are, um, methodological problems with this approach?
doesn’t your last paragraph depict a thought process which is the exact opposite of Hanlon’s Razor?
Doesn’t look like that to me. The opposite of Hanlon’s Razor is “I don’t understand her, therefore she is trying to hurt me.” I start by trying to figure out what the person wants, and only if I fail do I start to consider that she might be clueless (as Hanlon’s Razor would suggest) or mean (in case Hanlon’s Razor is wrong here).