“I see some risk that strategic abilities will be the last step in the development of AI that is powerful enough to take over the world.”
Just fyi—I feel like this is similar to what others have said. Most recently, benwr had a post here: https://www.lesswrong.com/posts/5rMwWzRdWFtRdHeuE/not-all-capabilities-will-be-created-equal-focus-on?commentId=uGHZBZQvhzmFTrypr#uGHZBZQvhzmFTrypr
Maybe we could call this something like “Strategic Determinism”
I think a more precise version of this claim might be:
1. The main bottleneck to AI advancement is “strategic thinking”.
2. There’s a decent amount of uncertainty about when, or whether, “strategic thinking” will be “solved”.
3. Human actions might have a lot of influence over (2). Depending on what choices humans make, strategic thinking might be solved sooner or much later.
4. Shortly after “strategic thinking” is solved, we gain a lot of certainty about what the future trajectory will look like. As in, the fate of humanity is more or less set by this point, and further human actions won’t be able to change it much.
5. “Strategic thinking” will lead to a very large improvement in potential capabilities. One main reason is that it would lead to recursive self-improvement. If there is one firm that has sole access to an LLM with “strategic thinking”, it is likely to develop a decisive strategic advantage.
Personally, such a view seems too clean to me.
1. I expect a long period during which LLMs get better at different aspects of strategic thinking, and this helps only to a limited extent.
2. I expect that better strategy will yield only limited gains in LLM capabilities for some time. The strategy might suggest better directions for improving LLMs, but these ideas won’t actually help that much. Maybe a firm with a 10% better strategist would be able to improve its effectiveness by 5% per year or something (see the toy calculation sketched after this list).
3. I think there could be a bunch of worlds where we have “idiot savants” that are amazing at some narrow kinds of tasks (coding, finance), but have poor epistemics in many ways we really care about. These will make tons of money, despite being very stupid in important ways.
4. I expect that many of the important gains that would come from “great strategy” will instead arrive through other routes, like narrow RL. A coding system highly optimized with RL wouldn’t benefit that much from additional “strategy” improvements.
5. A lot of the challenges in things like “making a big codebase” aren’t about “being a great strategist”, but about narrower problems like “how to store a bunch of context in memory” or “basic reasoning processes for architecture decisions specifically”.
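To make point 2 concrete, here is a minimal toy calculation. The 5%-per-year figure is just the made-up number from that point, not an estimate; the sketch only shows what modest compounding looks like, in contrast to a discontinuous jump.

```python
# Toy illustration of point 2 above (all numbers are assumptions for illustration):
# a firm whose strategist gives it a modest annual effectiveness edge pulls ahead
# gradually, rather than gaining a discontinuous "decisive advantage".

def effectiveness_after(years: int, annual_gain: float, baseline: float = 1.0) -> float:
    """Effectiveness multiplier after `years` of compounding at `annual_gain` per year."""
    return baseline * (1.0 + annual_gain) ** years

# Firm B has the 10% better strategist, which we assume buys ~5% extra effectiveness per year.
for years in (1, 3, 5, 10):
    ratio = effectiveness_after(years, 0.05)
    print(f"After {years:>2} years, Firm B is ~{(ratio - 1) * 100:.0f}% more effective than Firm A")
```

Under those assumptions, after ten years that’s roughly a 60% edge in effectiveness, which is a real advantage but looks nothing like a decisive strategic advantage.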