So … smart people are worse than average at the task of evaluating whether or not smart people are worse than average at some generic task which requires intellectual labor to perform, and in fact smart people should be expected to be better than average at some generic task which requires intellectual labor to perform?
Isn’t the task of evaluating whether or not smart people are worse than average at some generic task which requires intellectual labor to perform, itself a task which requires intellectual labor to perform? So shouldn’t we expect them to be better at it?
I mentioned this as a bias that a priori very much seems like it should exist. This does not mean smart people can’t get the right answer anyway, by using their superior skills. (Or because they have other biases in favor of intelligence, e.g. self-serving biases.) Maybe they can; I wouldn’t necessarily make strong predictions about it.
Consider the hypothetical collider from the perspective of somebody in the middle: people in the upper right quadrant have cut them off, and they have cut off people in the lower left quadrant. That is, they should observe exactly the same phenomenon in the people they know. Likewise for a below-average person. Thus, the hypothetical collider should lead every single person to observe the same inverted relationship between IQ and black swan awareness; the effect isn’t limited to the upper right quadrant. That is, if smart people are more likely than less-smart people to believe IQ isn’t particularly important, this belief cannot arise from the hypothetical collider model, which predicts the same beliefs among all groups of people.
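To make the point concrete, here is a minimal sketch (assuming the data-generating setup and the original selection rule quoted below, black_swan_awareness + g > 3): the threshold rule picks out a single shared group, so every member of that group computes the same collider-induced correlation.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000
g = rng.normal(0, 1, N)
# black swan awareness correlates 0.3 with g, with unit variance overall
awareness = 0.3 * g + rng.normal(0, np.sqrt(1 - 0.3**2), N)

# The threshold rule defines one global "hangs out together" set;
# it does not depend on who inside the set is doing the observing.
sel = g + awareness > 3

print(np.corrcoef(g, awareness)[0, 1])            # full population: about +0.3
print(np.corrcoef(g[sel], awareness[sel])[0, 1])  # within the selected set: negative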
I think it depends on the selection model.
The simulation in the main post assumed a selection model of black_swan_awareness + g > 3. If we instead change that to black_swan_awareness - g**2 * 0.5 > 1, we get the following:
This seems to exhibit a positive correlation.
For convenience, here’s the simulation code in case you want to play around with it:
import numpy as np
import matplotlib.pyplot as plt

N = 10000
g = np.random.normal(0, 1, N)  # general intelligence, standardized
# black swan awareness correlates 0.3 with g; the noise scale keeps it at unit variance
black_swan_awareness = np.random.normal(0, np.sqrt(1 - 0.3**2), N) + 0.3 * g
# modified selection rule (the main post's original was: g + black_swan_awareness > 3)
selected = black_swan_awareness - g**2 * 0.5 > 1
iq = g * 15 + 100  # rescale g to the usual IQ scale
plt.scatter(iq[~selected], black_swan_awareness[~selected], s=0.5, label="People Taleb fanboys avoid")
plt.scatter(iq[selected], black_swan_awareness[selected], s=0.5, label="People Taleb fanboys hang out with")
plt.legend()
plt.xlabel("IQ")
plt.ylabel("black swan awareness")
plt.title("Hypothetical collider")
plt.show()
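As a quick numeric check to go with the plot (a sketch that can be appended to the code above; exact values vary from run to run):

print(np.corrcoef(g, black_swan_awareness)[0, 1])  # full population: about +0.3
# among the selected group; the plot suggests this comes out positive
print(np.corrcoef(g[selected], black_swan_awareness[selected])[0, 1])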
Yes, it does depend on the selection model; my point was that the selection model you were using made the same predictions for everybody, not just Taleb. And yes, changing the selection model changes the results.
However, in both cases, you’ve chosen the selection model that supports your conclusions, whether intentionally or accidentally; in the post, you use a selection model that suggests Taleb would see a negative association. Here, in response to my observation that that selection model predicts -everybody- would see a negative association, you’ve responded with what amounts to an implication that the selection model everybody else uses produces a positive association. I observe that, additionally, you’ve changed the labeling to imply that this selection model doesn’t apply to Taleb, and “smart people” generally, but rather their fanboys.
However, if Taleb used this selection model as well, the argument presented in the main post, based on the selection model, collapses.
Do you have an argument, or evidence, for why Taleb’s selection model should be the chosen selection model, and for why people who aren’t Taleb should use this selection model instead?
However, if Taleb used this selection model as well, the argument presented in the main post, based on the selection model, collapses.
No, if I use this modified selection model for Taleb, the argument survives. For instance, suppose he is 140 IQ, i.e. 2.67 sigma above average in g. His selection expression then becomes black_swan_awareness - (g - 2.67)**2 * 0.5 > 1. Putting this into the simulation gives the following results:
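For anyone rerunning the code above, the only change needed is the selection line; a sketch, using 2.67 ≈ (140 - 100) / 15:

# Re-center the parabola on Taleb's own position in g (140 IQ, i.e. g ≈ 2.67)
selected = black_swan_awareness - (g - 2.67)**2 * 0.5 > 1
# The correlation among this selected group comes out negative in this setup,
# which is the sense in which the argument survives.
print(np.corrcoef(g[selected], black_swan_awareness[selected])[0, 1])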
You have a simplification in your “black swan awareness” column which I don’t think is appropriate to carry over; in particular, you’d need to rewrite the equation entirely to deal with an anti-Taleb, who doesn’t believe in black swans at all. (It also needs to deal with the issue of reciprocity: if somebody doesn’t hang out with you, you can’t hang out with them.)
You probably end up with a circle, the size of which determines what trends Taleb will notice. For the size of the apparent circle used for the fan, I think Taleb will notice a slight downward trend among 100-120 IQ people, followed by a general upward trend; so being slightly smart would be negatively correlated with black swan awareness, but being very smart would be positively correlated. Note that the absolute smartest people, off on the far right of the distribution, will observe a positive correlation, albeit a weaker one. The people absolutely most into black swan awareness, generally at the top, likewise won’t tend to notice any strong trends, but what they do see will tend to be a weak positive correlation. The people who are both very into black swan awareness and also smart will notice a slight downward correlation, but not a strong one. People who are unusually black swan unaware and higher-but-not-highest IQ, whatever that means, will instead notice an upward correlation.
The net effect is that a randomly chosen “smart person” will notice a slight upward correlation.
I’m not sure what you are saying; could you create a simulation or something?
Selection-induced correlation depends on the selection model used. It is valuable to point out that tailcalled implicitly assumes a specific selection model to generate a charitable interpretation of Taleb. But proposing more complex (equivalently, less plausible for someone to employ in their life) models instead is not likely to yield a more believable result.
Did you mean to write your comment in response to ACrackedPot, rather than to me?