I agree that the actual speed improvement for the optimized code can't go to infinity, since you can only optimize code so much. This is an example of diminishing returns caused by the task itself having a bound. I think this general argument (that the task itself bounds how well you can do) is a central part of your confidence that diminishing returns will be ubiquitous.
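For concreteness, here is a minimal sketch of the kind of bound I have in mind, using Amdahl's law as the illustration: if only a fraction of a program's runtime is optimizable, the overall speedup is capped no matter how good the optimizer is. The 90% figure is an invented example, not something from the earlier discussion.

```python
# Toy illustration (my own numbers): an Amdahl's-law-style bound on how much
# any optimizer, however smart, can speed up a program overall.

def overall_speedup(optimizable_fraction: float, local_speedup: float) -> float:
    """Amdahl's law: total speedup when only part of the runtime can be improved."""
    return 1.0 / ((1.0 - optimizable_fraction) + optimizable_fraction / local_speedup)

# Suppose (hypothetically) 90% of the runtime is optimizable.
for s in (2, 10, 100, 1_000_000):
    print(f"local speedup {s:>9,}x -> overall {overall_speedup(0.9, s):.2f}x")
# Even an infinitely good optimizer is capped at 1 / (1 - 0.9) = 10x overall.
```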
This is where I think we diverge. How many dan stronger than the average human is AlphaZero? How many is KataGo? I've read it's about 9 stones above humans.
And where would the best possible agent be? 11 stones?
Thinking of it in terms of 'stones' illustrates my point. In the physical world, intelligence gives a diminishing advantage. That could mean that as long as humans are still "in the running", aided by synthetic tools like open agency AI, we can defeat an AI superintelligence in a conflict, even if that superintelligence is infinitely smart. We would need a resource advantage, such as being allowed extra stones in the Go match, but we could win.
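To make the "extra stones" point concrete, here is a toy sketch with entirely invented numbers: treat skill and handicap as additive on an Elo-like scale, but cap the usable skill edge, taking the 11-stone guess above as the hypothetical ceiling. Once the handicap exceeds that ceiling, even an arbitrarily smart opponent is the expected loser. Both the stone-to-Elo conversion and the cap are assumptions for illustration, not established figures.

```python
# Toy model (all numbers invented): if the value of extra skill saturates, a large
# enough resource handicap beats even an arbitrarily smart opponent.

ELO_PER_STONE = 100.0                  # hypothetical: one handicap stone ~ 100 Elo
MAX_SKILL_EDGE = 11 * ELO_PER_STONE    # hypothetical: perfect play caps out ~11 stones above humans

def win_probability(opponent_skill_edge_elo: float, handicap_stones: float) -> float:
    """Elo-style win probability for the weaker side, with the opponent's skill edge capped."""
    effective_edge = min(opponent_skill_edge_elo, MAX_SKILL_EDGE) - handicap_stones * ELO_PER_STONE
    return 1.0 / (1.0 + 10.0 ** (effective_edge / 400.0))

# An "infinitely" smarter opponent still loses once our handicap exceeds the cap.
for stones in (0, 9, 11, 13):
    print(f"{stones:>2} stones handicap -> our win probability "
          f"{win_probability(float('inf'), stones):.3f}")
```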
Eliezer assumes that the advantage of intelligence scales forever, when it obviously doesn't. (Note that this relies on baked-in assumptions: if, say, physics has a major useful exploit that humans haven't found, the argument breaks, because the infinitely intelligent AI finds the exploit and tiles the universe.)