To some extent, yes, speed can compensate for intelligence, but this isn't really related to the question of FOOM.
In theory, if we have an AGI that is human-level but 1000x faster, it might perform at the level of 1000 humans rather than at the level of a human from the year 3000. If we have a giant population of AGIs such that we can replicate the entire edifice of human science but running 1000x faster, then sure. In practice, though, by Amdahl's law such a speed increase would just move the bottleneck to something else (probably running experiments and gathering data in the real world), so the effective speedup would be much smaller.
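As a toy illustration of that Amdahl's-law point (the serial fractions below are made-up numbers for illustration, not estimates):

```python
# A minimal sketch of Amdahl's law: if a fraction `serial_fraction` of the
# research pipeline (e.g. waiting on real-world experiments) can't be sped up,
# the overall speedup is capped no matter how fast cognition gets.
def effective_speedup(cognitive_speedup: float, serial_fraction: float) -> float:
    """Overall speedup when only the non-serial fraction benefits."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cognitive_speedup)

# Hypothetical numbers: with 1000x faster thinking, if 10% of the work is
# bottlenecked on physical experiments, the overall speedup is under 10x.
print(effective_speedup(1000, 0.10))  # ~9.9
print(effective_speedup(1000, 0.01))  # ~91
```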
The general point I agree with, though, is that we don't need FOOM or RSI for x-risk.
If we have a giant population of AGIs such that we can replicate the entire edifice of human science but running 1000x faster, then sure.
That's what I meant: a serial speedup of 1000x, and separately from that a sufficient population. Assuming 6 hours a day of intensive work for humans, 5 days a week, there is a 5.6x speedup from not needing to rest (168 hours in a week vs. 30 hours worked). Assuming a human produces on the order of 1 word per second during that work, the remaining 1000/5.6 ≈ 179x serial speedup, at 3/4 words per token, requires a generation speed of about 240 tokens/s. LLMs can currently do about 20-100 tokens/s when continuing a single prompt. Response latency is already a problem in practice, so generation speed is likely to improve.
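A minimal sketch of that arithmetic; the ~1 word/second human baseline is an assumption implied by the numbers above, not a measurement:

```python
# Working backwards from the figures in the comment above.
HOURS_PER_WEEK = 24 * 7           # 168 hours in a week
HUMAN_WORK_HOURS = 6 * 5          # 30 hours of intensive work per week
rest_speedup = HOURS_PER_WEEK / HUMAN_WORK_HOURS     # 5.6x from never resting

target_speedup = 1000
serial_speedup_needed = target_speedup / rest_speedup  # ~178.6x
human_words_per_second = 1.0                           # assumed baseline
words_per_second = human_words_per_second * serial_speedup_needed
tokens_per_second = words_per_second / 0.75            # 3/4 words per token

print(rest_speedup, serial_speedup_needed, tokens_per_second)
# 5.6 178.57... 238.1...  (i.e. roughly 240 tokens/s)
```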