I hadn’t considered this argument, thanks for sharing it.
It seems to rest on this implicit piece of reasoning:
(premise 1) If human intelligence is modelled as a normal distribution, it’s statistically more probable that the most intelligent human is only slightly more intelligent than the next most intelligent humans (a quick simulation below illustrates this).
(premise 2) One of the plausibly most intelligent humans was nonetheless capable of producing far better work than other highly intelligent humans in their field.
(conclusion) It’s probable that, past some threshold, small increases in intelligence lead to large increases in output quality.
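For concreteness, here’s a quick simulation of premise 1 (my own sketch, not part of the original argument). It assumes an IQ-like scale, intelligence ~ N(100, 15), and uses a 10-million-person sample rather than the full world population purely so it runs quickly; the qualitative point, that the top person is expected to sit only a few points above the runner-up while the population spread is 15 points, doesn’t depend on these illustrative choices.

```python
# A minimal sketch (my own illustration, not from the original argument):
# how far ahead is the single most intelligent person expected to be
# under a normal model? Assumes an IQ-like scale N(100, 15) and a
# 10-million-person sample for speed; both are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

population = 10_000_000
trials = 20
gaps = []

for _ in range(trials):
    sample = rng.normal(loc=100.0, scale=15.0, size=population)
    # Partially sort so the two largest values end up at the end of the array.
    top_two = np.partition(sample, population - 2)[-2:]
    gaps.append(top_two.max() - top_two.min())

print(f"mean gap between #1 and #2: {np.mean(gaps):.2f} IQ points")
print("population standard deviation: 15 IQ points")
# In these runs the #1 vs #2 gap comes out at a few IQ points, i.e. well
# under one population standard deviation, which is the sense in which
# premise 1 says the most intelligent human is only slightly ahead of
# the runner-up under a normal model.
```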
It’s ambiguous what ‘intelligence’ refers to here once we decouple that word from the quality of insight someone is capable of. Here’s a way of reframing the conclusion to make it more quantifiable and discussable: “Past some threshold, as a system’s quality of insight increases, the optimization required (by evolution or a training process) to select for a system capable of still greater insight decreases.”
The threshold at which this becomes true would need to be higher than the level of any AI so far; otherwise we would observe training processes easily optimizing these systems into superintelligences, rather than loss curves plateauing at some value above 0.
I’m uncertain whether there are conceptual reasons (priors) to expect this conclusion to be true or false.
I’m also not confident that human intelligence is normally distributed at the upper tail, since I’m not aware of strong theoretical reasons to believe it is.
Overall, given the plausibility of the premises, I’d assign the conclusion at least a two-digit probability (10% or more).