Is there a specific reason you expect that transition to happen at exactly the tail end of the distribution of modern human intelligence? There don’t seem, as far as I’m aware, to have been any similar transitions in the evolution of modern humans from our chimp-like ancestors. If you look at proxies, like stone tools from Homo habilis to modern humans, you see very slow improvements that gradually, but exponentially, accelerate in their rate of development.
I suspect that most of that improvement, once cultural transmission took off at all, happens because of the ways in which cultural/technological advancements feed into each other (in part because economic gains mean higher populations with better networks, which means accelerated discovery, which means more economic gains and larger, better-connected populations), and that is hard to disentangle from actual intelligence improvements. So I suppose it’s still possible that you could have this exponential progress in technology feeding itself while actual intelligence is hitting a transition to a regime of diminishing returns, and it would be hard to see the latter in the record.
Another decent proxy for intelligence is brain size, though. If intelligence wasn’t actually improving, the investment in larger brains just wouldn’t pay off evolutionarily, so I expect that when we see brain size increases in the fossil record we are also seeing intelligence increasing at at least a similar rate. Are there transitions in the fossil record from fast to slow changes in brain size in our lineage? That wouldn’t demonstrate diminishing returns to intelligence (it could instead be diminishing returns to the use of intelligence relative to its metabolic costs, which is different from particular genetic changes simply not impacting intelligence as much as in the past), but it would at least be consistent with it.
Anyway, I’m not entirely sure where to look for evidence of the transition you seem to expect. If such transitions were common in the past, it would increase my credence in one in the near future. But a priori it seems unlikely to me that there is such a transition at exactly the tail of the modern human intelligence distribution.
I mostly expect that as you exceed +6 std more and more, you start getting more and more into sub-critical intelligence explosion dynamics. (E.g. see the second half of this other comment I wrote.) I also expect very smart people will be able to better set up computer-augmented note-organizing systems, or maybe code narrow aligned AIs that might help them with their tasks (in a way that’s a lot more useful than current LLMs, but hard for other people to use). But idk.
I’m not sure how big the difference between +6 and +6.3 std actually is. I also might’ve confused the actual-competence vs genetic-potential scale. On the scale I used, drive/“how hard one is trying” also plays a big role.
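For a rough sense of how far out in the tail those numbers are, here’s a quick sketch under the (probably wrong in detail) assumption that intelligence is plain Gaussian; it only shows rarity, not how large the capability gap between +6 and +6.3 std actually is:

```python
from scipy.stats import norm

# Upper-tail rarity under a plain Gaussian assumption.
# Real tails may well be fatter, so treat these as order-of-magnitude illustrations only.
for z in (6.0, 6.3):
    p = norm.sf(z)  # P(Z > z)
    print(f"+{z} std: roughly 1 in {1 / p:,.0f} people")

# prints roughly: +6.0 std ~ 1 in 1.0 billion, +6.3 std ~ 1 in 6.7 billion
```

So on pure rarity the gap is a factor of ~6–7 in how many people you’d have to sample, which of course doesn’t by itself say how different the capabilities are.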
I actually mostly expect this from seeing that intelligence is pretty heavy-tailed. E.g. alignment research capability seems incredibly heavy-tailed to me, though it might be hard to judge the differences in capability there if you’re not already one of the relatively few people who are good at alignment research. Another example is how Einstein managed to find general relativity, where the rest of the world combined wouldn’t have been able to get there that way without more experimental evidence. I do not know why this is the case. It is (very?) surprising to me. Einstein didn’t even work on understanding and optimizing his mind. But yeah, that’s how I guess.
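To make “heavy-tailed” a bit more concrete, here’s a toy sketch; the lognormal shape and sigma=2 are assumptions for illustration, not measurements of actual research output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: per-person research output drawn from a lognormal distribution
# (assumed shape for illustration; sigma controls how heavy the tail is).
n = 1_000_000
output = rng.lognormal(mean=0.0, sigma=2.0, size=n)

top = np.sort(output)[-n // 1000:]  # top 0.1% of people by output
print(f"top 0.1% account for {top.sum() / output.sum():.0%} of total output")
# prints roughly 13-15% with these parameters; larger sigma pushes this higher,
# while a thin-tailed model would give the top 0.1% only slightly more than 0.1%
```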