If I had to take an honest guess? Theoretical discovery will behave “inefficiently” when it requires a breadth-first (or at least breadth-focused) search through the idea space before you can find things that “fit together”. Only once you have a bunch of things that “fit together” can you look at the shape of the “hole in idea-space” they all border, dive to the bottom of that lake, and bring up an entirely new idea that links or unifies them. (There’s a toy sketch of the breadth-versus-depth contrast after the numbered list below.)
So:
1) Mostly agreed, as described above.
2) As described above. My reasoning is sociological: our current reward system for researchers optimizes our research process for depth rather than breadth. Looking where others are paid not to look would usually be a decent way to find things others haven’t seen.
3) I don’t know more than the bare minimum about decision theory, so I can’t say.
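Here’s the promised toy sketch of the breadth-versus-depth contrast. The “idea graph” below is entirely invented and models nothing about real research; the only point is the visit order each frontier discipline produces. The breadth-focused searcher surveys many shallow neighbours (candidate pieces that might “fit together”) before committing anywhere, while the depth-focused one rides a single chain to the bottom first:

```python
from collections import deque

IDEA_GRAPH = {  # invented adjacency: idea -> ideas it borders
    "A": ["B", "C", "D"], "B": ["E"], "C": ["F"], "D": ["G"],
    "E": ["H"], "F": [], "G": [], "H": [],
}

def bfs_order(start):
    seen, order, frontier = {start}, [], deque([start])
    while frontier:
        node = frontier.popleft()  # FIFO: widen before deepening
        order.append(node)
        for nxt in IDEA_GRAPH[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

def dfs_order(start):
    seen, order, frontier = {start}, [], [start]
    while frontier:
        node = frontier.pop()  # LIFO: follow one chain all the way down
        order.append(node)
        for nxt in reversed(IDEA_GRAPH[node]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

print(bfs_order("A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
print(dfs_order("A"))  # ['A', 'B', 'E', 'H', 'C', 'F', 'D', 'G']
```

Notice that BFS has seen every direct neighbour of “A” by step four, while DFS has already buried itself three levels deep in one branch. That, in caricature, is my sociological point about where researchers are paid to dig.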
Now, as to the analysis of an intelligence explosion taking such a long time, I have an Opinion (beware the capital letter): there may not be an intelligence explosion at all. Current AI research indicates that coming up with formal models of utility-optimal agents in unknown active environments is the easy part, that making them conscious and (stably) self-modifying is the “next part” currently under research, and that scaling them down to fit inside the real universe is the hard part.
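For concreteness, the sort of formal model I mean is Hutter’s AIXI, whose action selection is, roughly as Hutter (2005) writes it, the expectimax expression below. Note how short the definition is, and how completely it ignores computational cost:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal monotone Turing machine, q ranges over environment programs, ℓ(q) is q’s length in bits, and m is the horizon. The alternation of max and sum is ordinary expectimax; the inner sum is the Solomonoff prior doing the “unknown environment” work. Everything in it is well-defined and none of it is computable, which is exactly the easy-part/hard-part split I mean.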
Schmidhuber (and when it comes to UAI I definitely root for the Schmidhubristic team ;-)) has claimed that Goedel Machine self-rewrites would dramatically speed up a mere AIXI paired with a mere HSearch until they became effective within the real world. But that puts his faith in the Goedel Machine’s proof searcher rather than in his (and his students’) own proofs that their algorithms genuinely are optimal and that the problems they’re tackling genuinely do carry nasty time-and-space bounds. If the first team to build an AI has to either keep it in an exponentially small environment (as existing AIXI models do) or wait astronomical periods of time for even the first self-rewrite, then the human race will die of an asteroid strike long before Friend Clippy can take us out.
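To make the structure of that claim concrete, here is a toy sketch of the Goedel Machine control loop as I understand Schmidhuber’s description. Every name in it is my own illustrative stand-in (there is no real API here), and the “proof system” is faked outright with a coin flip, because the proof searcher is precisely the part whose running time nobody can usefully bound:

```python
import random

def solver_step(state):
    """Stand-in for the machine's current problem-solving policy acting
    in the environment and accumulating reward."""
    state["reward"] += random.random()

def propose_rewrite():
    """Stand-in for systematically enumerating candidate proofs about
    candidate self-rewrites."""
    return {"proof_found": random.random() < 1e-3}

def provably_improves(candidate):
    """Stand-in for the real job: finding a *proof*, in an axiomatized
    model of machine plus environment, that executing this self-rewrite
    raises expected utility. Faking it here is exactly the step that
    hides the astronomical search time."""
    return candidate["proof_found"]

def goedel_machine(max_steps=10_000):
    state = {"reward": 0.0, "rewrites_applied": 0}
    for _ in range(max_steps):
        solver_step(state)                  # keep acting in the world
        candidate = propose_rewrite()       # spend some cycles on proof search
        if provably_improves(candidate):    # rewrite only when provably useful,
            state["rewrites_applied"] += 1  # which is the global-optimality pitch
    return state

print(goedel_machine())
```

The point of the sketch is the shape of the loop: all of the interesting work, and all of the unbounded waiting, hides inside provably_improves.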
This is the same nasty problem that AI has faced since the GOFAI days: it’s easy to find an algorithm that locates the optimal move by searching the entire solution space, but hard as hell to prove it will take any usefully small period of time to do so.
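Here’s that pattern in miniature, using toy Nim (take one to three stones; whoever takes the last stone wins) as a made-up example. Exhaustive minimax is optimal by construction, since it literally inspects every line of play, and that one-line correctness argument buys you exponential running time:

```python
def minimax(pile, maximizing=True):
    """Value of a toy-Nim position for the maximizing player,
    computed by searching the entire game tree."""
    if pile == 0:
        # Whoever moved last took the final stone and won, so an empty
        # pile is a loss for the player whose turn it now is.
        return -1 if maximizing else +1
    scores = (minimax(pile - take, not maximizing)
              for take in (1, 2, 3) if take <= pile)
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Optimal by exhaustion, and O(3^n) in the pile size for the
    same reason."""
    return max((take for take in (1, 2, 3) if take <= pile),
               key=lambda take: minimax(pile - take, maximizing=False))

print(best_move(10))  # -> 2: leave the opponent a multiple of four
```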