Suppose you give two people a maths quiz. One person is an average maths undergrad. The other is Terry Tao. The quiz was very easy, taken from the kids' book "Counting Made Fun", so both people got every question right. You then use this result to "prove the sharply diminishing returns of maths skill". That's what you are doing with your 99.999% accuracy figures.
The human advantage over chimps doesn’t look like humans being 99.999% accurate about where the banana is, while chimps are only 99% accurate. That extra accuracy only buys you a tiny extra sliver of banana.
The human intelligence advantage looks like humans having a whole new space of questions that chimps can't even comprehend. Some of these, like the electrical conductivity of copper, the mass of the Earth, or the fact that there are infinitely many primes, are known with great confidence. If you can't figure out the proof, you have basically no clue whether there are infinitely many primes; if you can figure it out, you're basically certain. In domains where verifying is easier than finding, like maths proofs, there is a step function from basically no clue to basically certain with a small increase in intelligence.
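Euclid's classic argument makes the verify-versus-find asymmetry concrete. Given any finite list of primes $p_1, \dots, p_k$, form

\[ N = p_1 p_2 \cdots p_k + 1. \]

Every $p_i$ divides $N - 1$, so none of them divides $N$, and therefore any prime factor of $N$ is a prime missing from the list; no finite list of primes can be complete. Finding that construction took insight. Checking it takes seconds, and once you've checked it, you're certain.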
Reality is full of step functions. A small difference in engineering skill is the difference between a rocket working and a rocket exploding. A moon mission designed by 10% stupider engineers won't get 90% of the way to the moon and bring back 90% as much moonrock; it simply fails.
AI progress currently looks full of big jumps: every few months a new paper arrives with a substantial improvement. There is no law of physics saying that making something smart is especially hard. If this trend continues, we should expect a fairly rapid ascent into superintelligence, even if humans were doing all the research. Sharply diminishing returns can be a thing, but they only happen when you are pushing close to some physical limit. The human brain is, I suspect, a long, long way from any such limit.