Y’know, I’m not really sure where that idea comes from. The optimization power of even a moderately transhuman AI would be quite incredible, but I’ve never seen a convincing argument that intelligence scales with optimization power (though the argument that optimization power scales with intelligence seems sound).
“optimization power” is more-or-less equivalent to “intelligence”, in local parlance. Do you have a different definition of intelligence in mind?
One that doesn’t classify evolution as intelligent.
So the nonapples theory of intelligence, then?
More precisely, a theory that requires a system to model the future before it counts as intelligent.