Ok, thanks. As far as I see, this is the most important core objection then.
There’s actually a second big unknown too before getting to full singularitarianism: whether this kind of human-equivalent AI could bootstrap itself to strongly superhuman levels with any sort of ease.
But the question of just how difficult it is to build the learning baby AI is really important, and I don’t have any good ideas on how to estimate it except from what can be inferred from biology. The human genome gives us an upper bound on the number of bits that keep passing through evolution, the initial design complexity of a human, but it’s big enough that without a very good design sense, navigating that kind of design space would indeed take generations. Brains and learning have been evolving for a very long time, which suggests the machinery may be very elaborate to get right. By contrast, symbolic language seems to have appeared very quickly in evolutionary terms, which gives reason to believe that once there’s a robust nonverbal cognitive architecture, adding symbolic cognition isn’t nearly as hard as getting the basic architecture together.
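For a rough sense of the scale involved, here’s a back-of-envelope bound on the information the genome can carry, using the commonly cited approximate genome size (my own illustrative figures, not numbers from the discussion):

```python
# Back-of-envelope upper bound on the information evolution passes
# through the human genome. Rough commonly cited values, not precise
# measurements; the true functional content is far smaller, since
# most of the genome is non-coding or redundant.

base_pairs = 3.2e9      # approximate length of the human genome
bits_per_base = 2       # 4 nucleotides -> log2(4) = 2 bits each

total_bits = base_pairs * bits_per_base
total_megabytes = total_bits / 8 / 1e6

print(f"~{total_bits:.1e} bits, ~{total_megabytes:.0f} MB")
# ~6.4e9 bits, ~800 MB
```

Even this loose upper bound, on the order of hundreds of megabytes, is a design space far too large to search blindly, which is the point about needing a very good design sense.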
It might also be that the selective pressure in favor of increased intelligence increased suddenly, most likely as a result of competition among humans.
Once a singleton AI becomes marginally smarter than the smartest human, how are we to distinguish further advances in intelligence from, say, an increase in its ability to impress us with high-tech parlor tricks? Would there be competition between AIs, and if so, over what?