This is a great post. Thanks for writing it! I think Figure 1 is quite compelling and thought provoking. I began writing a response, and then realized a lot of what I wanted to say has already been said by others, so I just noted where that was the case. I’ll focus on points of disagreement.
Summary: I think the basic argument of the post is well summarized in Figure 1, and by Vanessa Kosoy’s comment.
A high-level counter-argument I didn’t see others making:
I wasn’t entirely sure what your argument was for why long-term planning ability saturates. I’ve seen this argued based on both complexity and chaos, and I think here it’s a bit of a mix of the two.
Counter-argument to the chaos-argument: It seems we can make meaningful predictions about many relevant things far into the future (e.g. that the sun’s remaining natural lifespan is roughly 7–8 billion years).
Counter-argument to the complexity-argument: Increases in predictive ability can have highly non-linear returns, in terms of both planning depth and planning accuracy.
Depth: You often only need to be “one step ahead” of your adversary to defeat them and win the whole “prize” (e.g. market or geopolitical dominance). For instance, being able to predict the weather one day further ahead could have a major impact on military strategy.
Accuracy: If you can make more accurate predictions about, e.g., how asset prices will change, you can make a killing in finance.
High-level counter-arguments I would’ve made that Vanessa already made:
This argument proves too much: it suggests that there are not major differences in ability to do long-term planning that matter.
Humans have not reached the limits of predictive ability.
Low-level counter-arguments:
RE Claim 1: Why would AI only have an advantage in IQ, as opposed to other forms of intelligence/cognitive skill? No argument is provided.
(Argued by Jonathan Uesato) RE Claim 3: Scaling laws provide ~zero evidence that we are at the limit of “what can be achieved with a certain level of resources”.