Consider two models: In one model, the difference in intelligence between Einstein and a village idiot is tiny, but productivity in the sciences is highly sensitive to intelligence, in such a way that a small number of slightly more intelligent people have a disproportionate impact. In the other model, the difference between Einstein and a village idiot is vast, and a scientist’s impact is roughly proportional to their intelligence. How can you distinguish between these two models? You can’t, because they aren’t actually different models; they’re the same model under different parametrizations of intelligence.
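A toy numeric sketch of why the two descriptions coincide. All the numbers here are invented for illustration; the only real content is that one model is the other composed with a monotone change of variable:

```python
import math

# Hypothetical latent trait values; illustrative assumptions, not data.
idiot, einstein = 1.00, 1.05   # a tiny gap on this scale

# Model A: intelligence differences are tiny, but impact is
# exponentially sensitive to intelligence.
def impact_a(x):
    return math.exp(100 * x)

# Model B: re-parametrize intelligence as y = exp(100 * x); now the
# intelligence gap is vast, and impact is simply proportional to it.
def reparam(x):
    return math.exp(100 * x)

def impact_b(y):
    return y

# The observable quantity (impact) is identical under both
# parametrizations, so no data can tell them apart.
for x in (idiot, einstein):
    assert impact_a(x) == impact_b(reparam(x))

# Either way, a "slightly" smarter scientist has ~e^5 times the impact.
print(impact_a(einstein) / impact_a(idiot))
```

The same observations are compatible with “tiny differences, steep payoff curve” and “vast differences, linear payoff curve”; the distinction lives entirely in the coordinate chosen for intelligence.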
The reason Eliezer Yudkowsky makes these arguments is that he wants to make certain points about what happens when you extrapolate to much higher intelligence. However, you cannot make even qualitative points without understanding how things are parametrized. For example, in general relativity a point can appear to be a singularity under one coordinate system yet behave perfectly smoothly in another. Thus I am unsatisfied with how you sweep the issue of how ability is defined under the rug.
Consider a third model in which one in a dozen evenly matched scientists picks the direction and approach that actually leads to results while the rest happen not to find that particular gem.
It sure isn’t the only thing, but random chance does play a part.
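A quick Monte Carlo sketch of this third model. The setup (a dozen evenly matched scientists, one winning approach per big problem) is taken straight from the comment above; the career length and field counts are made-up parameters for illustration:

```python
import random
from collections import Counter

random.seed(1)

def run_field(n_scientists=12, n_big_problems=4):
    """One 'field': a few big problems, and credit goes to whichever
    evenly matched scientist happens to pick the winning approach.
    Returns the number of landmark results per scientist."""
    hits = [0] * n_scientists
    for _ in range(n_big_problems):
        hits[random.randrange(n_scientists)] += 1
    return hits

# Over many simulated fields, tally how often a scientist of identical
# ability ends a career with 0, 1, 2, ... landmark results.
tally = Counter()
for _ in range(10_000):
    tally.update(run_field())

for n_hits in sorted(tally):
    print(n_hits, tally[n_hits])
```

With these (assumed) numbers, roughly 70% of equally able scientists end up with zero landmark results while a lucky few collect two or more, so the ex-post record looks like a vast ability gap even though, by construction, there is none.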