Suppose people’s probability of solving a task is uniformly distributed between 0 and 1. That’s a thin-tailed distribution.
Now consider their probability of correctly solving 2 tasks in a row. That quantity, p², no longer has a flat distribution: its density piles up near zero, giving it more positive skewness.
If you consider e.g. their probability of correctly solving 10 tasks in a row, then the bottom 93.3% of people will all have less than a 50% chance (since 0.5^(1/10) ≈ 0.933), whereas e.g. the 99th-percentile person will have about a 90% chance of succeeding (0.99^10 ≈ 0.904).
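Those numbers can be checked with a quick simulation (a sketch; the million-person sample and the seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 1_000_000)  # each person's single-task success probability
p10 = p ** 10                     # probability of solving 10 tasks in a row

# Fraction below a 50% chance: P(p < 0.5**(1/10)) = 0.5**0.1, about 0.933
print(np.mean(p10 < 0.5))
# 99th-percentile person: p is about 0.99, and 0.99**10 is about 0.904
print(np.quantile(p, 0.99) ** 10)
```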
Conjunction is one of the two fundamental ways that tasks can combine, and it tends to make the tasks harder and rapidly make the upper tail do better than the lower tail, leading to an approximately-exponential element. Another fundamental way that tasks can combine is disjunction, which leads to an exponential in the opposite direction.
When you combine conjunctions and disjunctions, you get an approximately sigmoidal relationship. The location/x-axis-translation of this sigmoid depends on the task’s difficulty. And in practice, the “easy” side of this sigmoid can be automated or done quickly or similar, so really what matters is the “hard” side, and the hard side of a sigmoid is approximately exponential.
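As a sketch of that combination (the particular numbers, 20 conjuncts each solvable by any of 5 independent approaches, are made up for illustration):

```python
def task_success(p, k=20, m=5):
    """Task = conjunction of k subtasks, where each subtask is a
    disjunction of m independent approaches, each succeeding with
    probability p."""
    return (1 - (1 - p) ** m) ** k

# Success probability traces out an S-curve in p: near 0 for low p,
# rising steeply through the middle, then flattening near 1.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p = {p:.1f}  ->  {task_success(p):.4f}")
```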
Thanks!
Is the following a fair paraphrasing of your main hypothesis? (I’m leaving out some subtleties with conjunctive successes, but please correct the model in that way if it’s relevant.):
"""
Each deleterious mutation multiplies your probability of succeeding at a problem/thought by some constant. Let’s for simplicity say it’s 0.98 for all of them.
Then the expected number of successes per unit time for a person is proportional to 0.98^num_deleterious_mutations(person).
So the model would predict that if Person A had 10 more deleterious mutations than Person B, they would on average accomplish 0.98^10 ≈ 0.82 times as much in a given timeframe.
"""
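A minimal sketch of that model (the 0.98 penalty per mutation is the illustrative value from the text above, not an empirical estimate):

```python
def relative_output(extra_deleterious_mutations, penalty=0.98):
    """Expected successes per unit time, relative to a person with
    `extra_deleterious_mutations` fewer deleterious mutations."""
    return penalty ** extra_deleterious_mutations

print(relative_output(10))  # 0.98 ** 10, roughly 0.817
```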
I think this model makes a lot of sense, thanks!
On its own, I think it’s insufficient to explain how heavy-tailed human intelligence is; there were multiple cases where Einstein seems to have solved problems several times faster than the next runners-up. But if you use this model in a learning setting where a success means acquiring “better thinking algorithms”, then having 10 fewer deleterious mutations is like having 1/0.82 ≈ 1.22 times as much training time, and there might also be compounding returns: better thinking algorithms get you more and richer updates to them.
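The compounding idea can be sketched with a toy simulation (entirely my construction; the gain parameter and the functional form are arbitrary):

```python
def final_skill(steps, mutation_factor, base_p=0.5, gain=0.01):
    """Each successful 'thought' improves the thinker's algorithms,
    which raises the success rate of future thoughts."""
    skill = 1.0
    for _ in range(steps):
        p_success = min(1.0, base_p * mutation_factor * skill)
        skill *= 1 + gain * p_success  # better algorithms -> richer updates
    return skill

a = final_skill(1000, mutation_factor=1.0)        # 10 fewer deleterious mutations
b = final_skill(1000, mutation_factor=0.98 ** 10)
print(a / b)  # a's per-step edge compounds over the run
```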
Not sure whether this completely deconfuses me about how heavy-tailed human intelligence is, but it’s a great start.
I guess at least the heavy tail is much weaker evidence for my hypothesis than I initially thought (though so far I still think my hypothesis is plausible).