These diminishing returns to intelligence matter immensely for forecasting AI risk, since whether FOOM is possible depends heavily on the returns to increasing intelligence in the range around the human level.
An AGI that performs at merely human level is still faster than humans. A 1000x increase in speed (accounting for no need to rest) is enough to deliver research from the year 3000 within a year, for anything theoretical or anything that doesn't require too much compute for simulations and the like. That's FOOM enough; the distinction from an even greater disruption won't matter in terms of AI risk.
To some extent, yes, speed can compensate for intelligence, but this isn't really related to the question of FOOM.
In theory, an AGI which is human-level but 1000x faster might be able to perform at the level of 1000 humans, rather than at the level of a human from the year 3000. If we have a giant population of AGIs such that we can replicate the entire edifice of human science, but running 1000x faster, then sure. In practice, though, by Amdahl's law such a speed increase would just move the bottleneck to something else (probably running experiments and gathering data in the real world), so the effective speedup would be much smaller.
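As a rough illustration of the Amdahl's-law point (a minimal sketch; the split between "pure cognition" and "waiting on real-world experiments" is a hypothetical assumption of mine, not a figure from the discussion):

```python
# Amdahl's law: overall speedup when only a fraction p of the work
# (here, the "pure thinking" part of research) is accelerated by a factor s.
def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical split: 70% of research time is cognition, 30% is running
# experiments / gathering data that doesn't get faster.
print(overall_speedup(p=0.70, s=1000))  # ~3.3x, not 1000x
# Even if 95% of the work is cognition, the cap is about 20x:
print(overall_speedup(p=0.95, s=1000))  # ~19.6x
```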
I agree with the general point, though, that we don't need FOOM or RSI for x-risk.
If we have a giant population of AGIs such that we can replicate the entire edifice of human science but running at 1000x faster, then sure.
That's what I meant: serial speedup of 1000x, and separately from that a sufficient population. Assuming 6 hours a day of intensive work for humans, 5 days a week, there is a 5.6x speedup just from not needing to rest. With 3/4 words per token (and taking roughly one word per second as the human baseline), a 1000x overall speedup then requires a generation speed of about 240 tokens/s. LLMs can do about 20-100 tokens/s when continuing a single prompt, and response latency is already a problem in practice, so this is likely to improve.
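Spelling out that arithmetic (the ~1 word per second human baseline is my reading of how the 240 tokens/s figure is reached, not something stated explicitly above):

```python
# Back-of-the-envelope for the 240 tokens/s figure.
human_hours_per_week = 6 * 5        # 6 h/day of intensive work, 5 days/week
ai_hours_per_week = 24 * 7          # no need to rest
rest_speedup = ai_hours_per_week / human_hours_per_week      # 5.6x

target_speedup = 1000
serial_speedup_needed = target_speedup / rest_speedup         # ~178.6x

human_words_per_sec = 1.0           # assumed pace of deliberate human thought
words_per_token = 3 / 4
tokens_per_sec_needed = serial_speedup_needed * human_words_per_sec / words_per_token
print(round(rest_speedup, 1), round(tokens_per_sec_needed))   # 5.6, 238 (~240 tokens/s)
```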