Whether or not there is a good definition of intelligence depends on whether there is a sufficiently unitary concept there to be defined. That is crucial because it also determines whether AI is seedable or not.
Think about a clever optimising compiler that runs a big search looking for clever ways of coding the source it is compiling. Perhaps it is entered in a competition based on compiling a variety of programs, running them, and measuring their performance. Now use it to compile itself. It runs faster, so it can search more deeply and produce cleverer, faster code. So use it to compile itself again!
One hopes that the speed-ups from successive self-compilations keep adding a little: 1, 1+r, 1+r+r², 1+r+r²+r³, … If it works like that, then the limiting speed-up is 1/(1−r), with a singularity at r = 1 when the software wakes up. So far, software disappoints these hopes. The tricks work once, add a tiny improvement the second time around, and make things worse on the third go for complicated and impenetrable reasons. This is very different from a nuclear reactor, in which each round of neutron multiplication is like the previous round and runaway is a real possibility.
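The hoped-for compounding can be sketched numerically. A minimal model, assuming each self-compilation multiplies the remaining gain by a constant factor r (the function name and parameters are illustrative, not from the original):

```python
def total_speedup(r, passes):
    """Cumulative speed-up after `passes` self-compilations.

    Models the partial geometric sum 1 + r + r^2 + ... + r^passes,
    which for r < 1 converges to the ceiling 1 / (1 - r), but for
    r >= 1 grows without bound -- the hoped-for runaway.
    """
    return sum(r**k for k in range(passes + 1))

# With r = 0.5 the gains shrink each round and the total speed-up
# approaches the limit 1 / (1 - 0.5) = 2.0 but never exceeds it.
print(total_speedup(0.5, 10))   # close to, but below, 2.0

# At r = 1 every round adds as much as the last, so the sum
# just grows linearly with the number of passes: no ceiling.
print(total_speedup(1.0, 9))    # 10.0
```

The contrast the text draws is visible here: a reactor-like process corresponds to r ≥ 1, while observed software improvement behaves like a small r, with gains that taper off after a pass or two.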
The core issue is the precise sense in which intelligence is real. If it is real in the sense of there being a unifying, codifiable theme, then we can define it and write a seed AI. But maybe it is real only in the “I know it when I see it” sense, where each increment is unique and never comes as “more of the same”.