I took this a different way: what's the correlation between resources and winning conflicts for humans on Earth? Assuming the curve is the same shape as for chess, what Elo does that place human conflicts at?
It depends: on the communication technology of the era, on training, on the quality of leaders, on whether all the forces are under a single unified command, and so on.
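That said, the mechanical part of the estimate is easy: given an observed win rate for the side with a fixed resource edge, you can back out the "skill-equivalent" rating gap under the standard logistic Elo model. A minimal sketch (the model is standard; the 70% win rate below is a made-up illustration, not data):

```python
import math

def elo_win_prob(delta: float) -> float:
    """Standard logistic Elo model: win probability for the higher-rated side."""
    return 1.0 / (1.0 + 10.0 ** (-delta / 400.0))

def implied_elo_gap(win_rate: float) -> float:
    """Invert the logistic model: Elo gap implied by an observed win rate."""
    return 400.0 * math.log10(win_rate / (1.0 - win_rate))

# Illustrative only: if the side with some fixed resource advantage
# historically won 70% of its conflicts, the implied gap is:
print(implied_elo_gap(0.70))  # ~147 Elo
```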
That's not the main takeaway from this, though. The main takeaway is that increasing intelligence has diminishing returns: a hypothetical "perfect policy" AI general, with an effectively infinite Elo, can be crushed by "humans with AI tools to help" at, say, 5000 Elo (where 1000 is an average human general) given even a modest resource advantage. Say 30 percent more forces, or forces that are technologically inferior but 2-3 times as numerous.
And there is some force disparity at which unaided humans, at their 1000 Elo, win as well.
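One way to make that concrete is Lanchester's square law, under which a force's fighting strength scales as effectiveness times numbers squared, with "effectiveness" being where a smarter policy enters. A toy sketch, assuming (purely for illustration, not a known value) that perfect play caps out at 1.5x the effectiveness of competent AI-assisted humans:

```python
# Toy Lanchester-square-law model (my framing, not from the thread):
# fighting strength = effectiveness * N^2. The argument hinges on the
# effectiveness multiplier from intelligence saturating; the ceiling
# below is an assumed number, chosen only to illustrate the shape.

def strength(n: float, effectiveness: float) -> float:
    """Lanchester square law: aimed-fire fighting strength of a force."""
    return effectiveness * n**2

PERFECT_PLAY_CEILING = 1.5  # assumed max edge of a "perfect" general

# Humans with 30% more forces vs. the perfect general:
print(strength(1.3, 1.0) > strength(1.0, PERFECT_PLAY_CEILING))  # True: 1.69 > 1.5

# Humans with 2.5x the numbers but inferior tech (half the effectiveness):
print(strength(2.5, 0.5) > strength(1.0, PERFECT_PLAY_CEILING))  # True: 3.125 > 1.5
```

If the multiplier really does saturate, then past some force ratio no amount of additional intelligence closes the gap; that is the whole claim in miniature.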
This follows from the nature of intelligence itself. Each bit of policy complexity added on top of a random policy has diminishing returns: you tend to find the highest-yield improvements first ("let's have all our forces form a line so they don't hit each other, and start blasting"), and each subsequent improvement yields smaller gains. (Or in chess: "let's put my higher-value pieces on squares where a lower-value piece can't capture them on the very next move.")
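The Elo curve itself encodes this: against a fixed opponent, each additional 400 rating points buys a smaller slice of win probability, which is why going from 5000 Elo to "infinite" Elo is worth almost nothing next to a modest resource edge. A quick sketch:

```python
# Diminishing returns built into the standard Elo model: each extra
# 400 points of rating advantage adds a shrinking sliver of win probability.

def elo_win_prob(delta: float) -> float:
    """Win probability of the stronger side under the logistic Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-delta / 400.0))

for delta in (0, 400, 800, 1200, 1600):
    print(delta, round(elo_win_prob(delta), 4))
# 0    0.5
# 400  0.9091
# 800  0.9901
# 1200 0.999
# 1600 0.9999
```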