I find it interesting, this idea that intelligence (or at least the kind of intelligence that helps you make predictions) becomes less useful for sufficiently complex systems or sufficiently long time frames. My intuition is that there is something there, although it's not quite the thing you're describing.
I agree that the optimal predictability of the future decays as you try to predict farther into the future. If the thing you're trying to predict is chaotic in the technical sense, you can make this into a precise statement.
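To spell out the precise statement I have in mind (standard chaos-theory bookkeeping, not something from your post): if the system's largest Lyapunov exponent is $\lambda > 0$, an initial measurement error $\delta_0$ grows roughly as

$$\delta(t) \approx \delta_0 \, e^{\lambda t},$$

so forecasts stay within a tolerance $\Delta$ only up to a horizon

$$t_\ast \approx \frac{1}{\lambda} \ln\frac{\Delta}{\delta_0}.$$

The horizon grows only logarithmically as measurements improve, which is why the decay is so unforgiving: beyond $t_\ast$, even an ideal predictor can do little better than forecasting the system's long-run statistics.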
I disagree that the skill needed to match this optimum typically has a peak. Even in extremely chaotic systems, it is usually possible to find some structure that is not immediately obvious. Heuristics are sometimes more useful than precise calculations, but building good heuristics and knowing how to use them is itself a skill that improves with intelligence. I suspect that the skill needed to reach the optimum usually increases monotonically with longer prediction times or greater complexity.
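As a toy illustration (my example, not from your post): the logistic map at $r = 4$ is about as chaotic as a one-dimensional system gets, yet it has simple statistical structure that a sufficiently clever predictor can exploit long after point forecasts have failed.

```python
# Toy illustration: the logistic map x -> 4 x (1 - x) is fully chaotic,
# yet its long-run statistics remain simple and exploitable.
import numpy as np

def logistic(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0 = 0.3
x0_measured = x0 + 1e-10  # tiny measurement error

# Point prediction: the error blows up from 1e-10 to order one within
# a few dozen steps (the Lyapunov exponent is ln 2 per step).
for t in (10, 30, 60):
    err = abs(logistic(x0, t) - logistic(x0_measured, t))
    print(f"t={t:3d}  point-prediction error = {err:.3f}")

# Statistical prediction: iterates follow the known invariant density
# 1 / (pi * sqrt(x (1 - x))), so "where the system spends its time"
# stays predictable even when individual trajectories do not.
xs = np.empty(100_000)
x = 0.3
for i in range(xs.size):
    x = 4.0 * x * (1.0 - x)
    xs[i] = x
print("fraction of time in [0, 0.1]:", np.mean(xs < 0.1))
print("invariant-density prediction:", (2 / np.pi) * np.arcsin(np.sqrt(0.1)))
```

The first half shows point predictions collapsing; the second shows residual structure (the arcsine distribution) that only pays off if you know to look for it, which is the kind of non-obvious structure I mean.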
Instead, the peak appears in the marginal benefit of additional intelligence. Consider the difference in predictive ability between two different intelligences. At short times / low complexity, the difference is small because both are very good at making predictions. At long times / high complexity, the difference is again small because, even though neither is at the optimum, the low ceiling set by the optimum limits how far apart they can be. The biggest difference appears at intermediate scales, where there are still good predictions to be made but they are hard to make.
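A crude toy model of that shape (my own parametrization, not anything from your figure): suppose an intelligence of level $s$ predicts a system with decay rate $\lambda$ with accuracy

$$A(s, t) = e^{-\lambda t / s}.$$

The gap between a smarter and a duller predictor, $s_2 > s_1$,

$$\Delta A(t) = e^{-\lambda t / s_2} - e^{-\lambda t / s_1},$$

is zero at $t = 0$ (both near perfect), tends to zero as $t \to \infty$ (both pinned under the decaying optimum), and peaks at the intermediate time

$$t_\ast = \frac{\ln(s_2 / s_1)}{\lambda \left( 1/s_1 - 1/s_2 \right)},$$

obtained by setting $\mathrm{d}\Delta A / \mathrm{d}t = 0$.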
A picture of how I think this works, similar to Figure 1, is linked here: https://drive.google.com/file/d/1-1xfsBWxX7VDs0ErEAc716TdypRUdgt-/view?usp=sharing
As long as there are other skills relevant to most jobs that intelligence trades off against, we would expect the strongest incentives for intelligence to occur in the jobs where the marginal benefit of additional intelligence is largest.