The mechanism described here seems fairly plausible prima facie, but there’s something self-undermining about it.
Suppose I am a smart person and I prefer to associate with smart people. As a result of this, I see evidence everywhere that being smart is anticorrelated with all the other things I care about. (And indeed that they are all anticorrelated with one another.) As a result of this, I come to have a “bias against general intelligence”.
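(To make the claimed effect concrete, here is a minimal simulation sketch: two traits that are independent in the population, plus a hypothetical rule of associating with anyone sufficiently high on either trait. The names, threshold, and sample size are illustrative assumptions, not anything from the post.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two traits, independent in the general population.
smart = rng.standard_normal(n)
nice = rng.standard_normal(n)

# Hypothetical association rule: I get to know anyone who is sufficiently
# smart OR sufficiently nice (roughly the top 30% on either trait).
threshold = 0.5
associates = (smart > threshold) | (nice > threshold)

print("whole population:", round(np.corrcoef(smart, nice)[0, 1], 3))   # ~0.0
print("my associates:   ", round(np.corrcoef(smart[associates], nice[associates])[0, 1], 3))  # clearly negative
```

Among the selected associates, the two independent traits come out anticorrelated, which is the “evidence everywhere” being described.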
What happens then? Aren’t I, with my newly-formed anti-g bias, likely to stop preferring to associate with smart people? And won’t this make those anticorrelations stop appearing?
Maybe not. For instance, maybe it was never that I prefer to associate with smart people; it’s that, being a smart person myself, my job and hobbies tend to be smart-person jobs and hobbies, the people I got to know at university are smart people, and so forth. So whatever my preferences, being smart is going to give you an advantage in getting to know me, and then all the same mechanisms can operate.
Or maybe all my opinions are formed in my youth, when I am hanging out with smart people; then, having developed an anti-g bias, I stop hanging out with smart people, but by then my brain is too ossified for me to update my biases just because I’ve stopped seeing evidence for them.
Still, this does have at least a whiff of paradox about it. I am all the more inclined to think so because the mechanism seems kinda implausible for a couple of other reasons.
First, there’s nothing very special about g here. If I tend to associate with smart people, socialists and sadomasochists[1], doesn’t this argument suggest that as well as starting to think that smart people tend to be sexually repressed fascists I should also start to think that socialists are sexually repressed and stupid, and that sadomasochists are stupid fascists? Shouldn’t this mechanism lead to a general disenchantment with all the characteristics one favours in one’s associates?
[1] Characteristics chosen for the alliteration and for being groups that a person might in fact tend to associate with. They aren’t a very good match for my actual social circles.
Second, where’s the empirical evidence? We’ve got the example of Taleb, but his anti-intellectualism could have lots of other causes besides this mechanism, and I can’t say I’ve noticed any general tendency for smart people to be more likely to think that intelligence is correlated with bad things. (For sure, sometimes they do. But sometimes stupid people do too.)
If I look within myself (an unreliable business, of course, but it seems like more or less the best I can do), I don’t find any particular sense that intelligent people are likely to be ruder or less generous or uglier or less fun to be with or (etc.), nor any of the other anticorrelations that this theory suggests I should have learned.
And there’s a sort of internal natural experiment I can look at. About 15 years ago I had a fairly major change of opinion: I deconverted from Christianity. So, this would suggest that (1) back then I should have expected clever people to be less religious, (2) now I should expect them to be more religious, (3) back then I should have expected religious people to be less clever, and (4) now I should expect them to be cleverer. (Where “religious” should maybe mean something like “religiously compatible with my past self”.) I think #1 and #3 are in fact at least slightly correct (and one of the things that led me to consider whether in fact some of my most deeply held beliefs were bullshit was the fact that on the whole the smartest people seemed rarely to be religious) but #2 and #4 are not. This isn’t outright inconsistent with the theory being advanced here—it could be, as I said above, that I’m just not as good at learning from experience as I used to be, because I’m older. But the simpler explanation, that very clever people do tend to be less religious (or at least systematically seem so to me for some reason), seems preferable to me.
There’s a pretty famous paper that appeared to show math and verbal intelligence trading off against each other. It was done on students at a mid-tier college, and the actual cause was selection: students who were good at both went to a better school. So the effect definitely exists.
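(The selection story in that example is easy to reproduce in a toy simulation. The normality assumption, the cutoffs, and the “two schools skimming by combined score” rule below are illustrative assumptions, not the actual study’s design.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Math and verbal ability, independent in the applicant pool.
math_score = rng.standard_normal(n)
verbal_score = rng.standard_normal(n)
combined = math_score + verbal_score

# Hypothetical admissions rule: a top school skims off the highest combined
# scores, and the mid-tier college takes the next band down.
top_cut = np.quantile(combined, 0.95)
mid_cut = np.quantile(combined, 0.70)
mid_tier = (combined > mid_cut) & (combined <= top_cut)

print("applicant pool:   ", round(np.corrcoef(math_score, verbal_score)[0, 1], 3))  # ~0.0
print("mid-tier students:", round(np.corrcoef(math_score[mid_tier], verbal_score[mid_tier])[0, 1], 3))  # negative
```

Within the band admitted to the hypothetical mid-tier college, math and verbal scores come out negatively correlated even though they are independent in the applicant pool.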
That said, I’m not convinced it’s the dominant cause here. Being known for being aggressive and mean while blocking people who politely disagree pollutes your data in all kinds of ways.
Suppose I am a smart person and I prefer to associate with smart people. As a result of this, I see evidence everywhere that being smart is anticorrelated with all the other things I care about. (And indeed that they are all anticorrelated with one another.) As a result of this, I come to have a “bias against general intelligence”.
What happens then? Aren’t I, with my newly-formed anti-g bias, likely to stop preferring to associate with smart people? And won’t this make those anticorrelations stop appearing?
First, I don’t necessarily know that this bias is strong enough to counteract other biases like self-serving biases, or to counteract smarter people’s better ability to understand the truth.
But secondly, I said that it creates the illusion of a tradeoff between g and good things. g still has the qualities that give it advantages in the first place, since it would still generally correlate with smart ideas outside of whatever particular thing one is valuing.
But also, I don’t know that people are being deliberate in associating with similarly intelligent people. It might also have happened as a result of stratification by job, interests, class, politics, etc. Some people I know who have more experience with how social networks form across society claim that this is more what tends to happen, though I don’t know what they are basing it on.
First, there’s nothing very special about g here. If I tend to associate with smart people, socialists and sadomasochists[1], doesn’t this argument suggest that as well as starting to think that smart people tend to be sexually repressed fascists I should also start to think that socialists are sexually repressed and stupid, and that sadomasochists are stupid fascists? Shouldn’t this mechanism lead to a general disenchantment with all the characteristics one favours in one’s associates?
[1] Characteristics chosen for the alliteration and for being groups that a person might in fact tend to associate with. They aren’t a very good match for my actual social circles.
The special thing is that there is assortment on g. That does also apply to your examples, but it doesn’t apply to most variables.
Generally: points taken. On the last point: there may not exactly be assortment on other variables, but surely it’s true that people generally prefer to hang out with others who are kinder, more interesting, more fun to be with, more attractive, etc.
As you select for more variables, the collider relationship between any individual variable pair gets weaker because you can’t select as strongly. So there’s a limit to how many variables this effect can work for at once.
My argument (I think) bypasses this problem because (1) there is documented, fairly strong assortment on intelligence, and (2) I specifically limit the other variable to whichever one the person in question particularly values.
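(One simple way to see the point about selecting on more variables is a sketch like the following, assuming independent standard-normal traits and selection on the top 30% of their sum; the function name and all the numbers are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def mean_pairwise_corr(k, keep_frac=0.3):
    """Average off-diagonal correlation among units kept for a high sum of k independent traits."""
    x = rng.standard_normal((n, k))
    total = x.sum(axis=1)
    kept = x[total > np.quantile(total, 1 - keep_frac)]
    corr = np.corrcoef(kept, rowvar=False)
    return corr[~np.eye(k, dtype=bool)].mean()

for k in (2, 4, 8, 16):
    print(k, round(mean_pairwise_corr(k), 3))
# The induced anticorrelation per pair of traits shrinks as k grows.
```

In this particular selection model the induced anticorrelation per pair shrinks roughly like 1/k, which is one way of cashing out “you can’t select as strongly” on any single variable.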
I think I can reconcile your experiences with Taleb’s (and expand out the theory that tailcalled put forth). The crux of the extension is that relationships are two-way streets and almost everyone wants to spend time with people who are better than them in the domain of interest.
The consequence of this is that most people are equally constrained by who they want in their social circle and by who wants them in it. While most people would like to hang out with the super smart or the super domain-competent (which would induce the negative correlation), those people are busy because everyone wants to hang out with them, and so most people hang out with their neighbors instead. Since Taleb is extraordinary, almost everyone wants to hang out with him, so the distribution of people he hangs out with is determined by his preference criteria alone. For most people near the center of the cluster, the bidirectionality of relationships leads them to social circles that are, well, circle-shaped. Taleb, in a fashion true to himself, is dealing with the tail end of a multivariate distribution.
Since normal people are dealing with social Circles instead of social Tails, they do not experience the negative correlation that Taleb does.
The standard example I’ve always seen for “collider bias” is that we have a bunch of restaurants in our hypothetical city, and it seems like the better their food, the worse their drinks, and vice versa. This is (supposed to be) because places with bad food and bad drinks go out of business, and there is a cap on the effort that can be applied to food or drinks.
How would the self-defeating thing play in here? I don’t see yet why it shouldn’t, but I also don’t recognize a way for it to happen, either. Could you walk me through it?
I don’t think it would, in practice. One reason is that “bad/bad places go out of business” is a mechanism that doesn’t go via your preferences in the way that “you spend time with smart nice people” does. But if it did, I think it would go like this.
You go to restaurants that have good food or good drinks or both. This induces an anticorrelation between food quality and drink quality in the restaurants you go to. After a while you notice this. You care a lot about (let’s say) having really good food, and having got the idea that maybe having good drinks is somehow harmful to food quality, you stop preferring restaurants with good drinks. Now you are just going to restaurants with good food, and not selecting on drink quality, so the collider bias isn’t there any more (this is the bit that’s different if in fact there’s a separate selection that kills restaurants whose food and drink are both bad, which doesn’t correspond to anything in the interpersonal-relations scenario), so you decide you were wrong about the anticorrelation. So you start selecting on drink quality again, and the anticorrelation comes back. Repeat until bored, or until you think of collider bias as an explanation for your observations.
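(A minimal sketch of the two regimes in that walkthrough, assuming independent food and drink quality and simple threshold rules; all the numbers are made up.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Food and drink quality, independent across all restaurants.
food = rng.standard_normal(n)
drinks = rng.standard_normal(n)

def corr(mask):
    return round(np.corrcoef(food[mask], drinks[mask])[0, 1], 3)

# Regime 1: you go anywhere with good food OR good drinks.
either_good = (food > 0.5) | (drinks > 0.5)
print("selecting on food or drinks:", corr(either_good))  # negative: the collider bias

# Regime 2: having "learned" that good drinks hurt the food, you select on food alone.
food_only = food > 0.5
print("selecting on food only:     ", corr(food_only))    # ~0.0: the anticorrelation vanishes
```

Selecting on either quality induces the anticorrelation; selecting on food alone makes it vanish, which is the point at which the cycle described above can restart.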
Thanks for the explanation of how it’s different; now I understand what the original post meant.
You can’t stop hanging out with yourself, and “yourself” is the most important biasing sample.
If this is intended to contradict my argument for why tailcalled’s proposed mechanism is self-undermining, I don’t see it. Could you go into a bit more detail?
Watch this.
mic drop