GPT-4 can’t even do date arithmetic correctly. It’s superhuman in many ways, and dumb in many others. It is dumb in strategy, philosophy, game theory, self-awareness, mathematics, arithmetic, and reasoning from first principles. It’s not clear that current scaling laws will be able to make GPTs human-level in these skills. Even if it reaches human level, many of the relevant problems are NP: solutions are hard to find but easy to verify. This allows effective utilization of an unaligned weak superintelligence. Its path to strong superintelligence and free replication seems far away. It took years to get from GPT-3 to GPT-4, GPT-4 is not that much better, and these were all low-hanging fruit. My prediction is that GPT-5 will show fewer improvements and will be similarly slow to develop. Its improvements will be mostly in areas it is already good at, not in its inherent shortcomings. Most improvements will come from augmenting LLMs with tools. This will be significant, but importantly it will not enable strategic thinking or mathematical reasoning. Without these skills, it’s not an x-risk.
I am not as pessimistic about the future capabilities, and definitely not as sure as you are (hence this post), but I see what you describe as a possibility. Definitely there is a lot of overhang in terms of augmentation: https://www.oneusefulthing.org/p/it-is-starting-to-get-strange
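On the augmentation-overhang point: the date arithmetic that reportedly trips up GPT-4 is exactly the kind of task a trivial tool call answers perfectly. A minimal Python sketch of the sort of calculation meant (the `days_between` helper is illustrative, not from either post):

```python
from datetime import date

def days_between(a: date, b: date) -> int:
    """Return the number of days from a to b -- the kind of date
    arithmetic an LLM gets wrong but a tool call gets exactly right."""
    return (b - a).days

# 2024 is a leap year, so this one-year span contains 366 days.
print(days_between(date(2023, 3, 14), date(2024, 3, 14)))  # 366
```

Wiring such a function in as a tool closes this particular gap immediately, which is why tool augmentation looks like low-hanging fruit even if the model’s inherent reasoning doesn’t improve.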
I think I touched on these points, that some things are easy and others are hard for LLMs, in my other post, https://www.lesswrong.com/posts/S2opNN9WgwpGPbyBi/do-llms-dream-of-emergent-sheep