The rapidity of evolution from chimp to human is remarkable, but you can infer what you’re trying to infer only if you believe evolution reliably produces steadily more intelligent creatures. It might be that conditions temporarily favored intelligence, leading to humans; our rapid rise is then explained by the anthropic principle, not by universal evolutionary dynamics.
Knowledge (all that actual science, engineering, and general knowledge accumulation we did) = the integral of cognition + metaknowledge(current knowledge) over time, where knowledge feeds upon itself in what seems to be a roughly exponential process.
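To pin down what "feeds upon itself" has to mean for that exponential to go through (my formalization, not the original post's): if the rate of accumulation is proportional to the stock of knowledge already in hand,

    dK/dt = c * K(t)   =>   K(t) = K(0) * e^(c*t),

then the process is exponential, and everything rides on the coefficient c staying constant as K grows.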
Knowledge feeds on itself only when it is continually spread out over new domains. If you keep trying to learn more about the same domain—say, to cure cancer, or make faster computer chips—you get logarithmic returns, requiring an exponential increase in resources to maintain constant output. (IIRC it has required exponentially-increasing capital investments to keep Moore’s Law going; the money will run out before the science does.) Rescher wrote about this in the 1970s and 1980s.
This is important because it says that, if an AI keeps trying to learn how to improve itself, it will get only logarithmic returns.
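To see what logarithmic returns do to that coefficient (again a toy model of mine, not anything from the post): suppose knowledge within one fixed domain grows only as the log of cumulative resources R,

    K_domain = a * ln(R)   =>   dK_domain/dt = a * (dR/dt) / R,

so holding the rate of progress constant forces dR/dt to be proportional to R, i.e. exponentially growing expenditure just to keep output flat. That is the Moore's-Law pattern above, and an AI studying the single fixed domain of its own design sits on the same curve.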
When you fold a complicated, choppy, cascade-y chain of differential equations in on itself via recursion, it should either flatline or blow up. You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole.
This is the most important and controversial claim, so I’d like to see it better-supported. I understand the intuition; but it is convincing as an intuition only if you suppose there are no negative feedback mechanisms anywhere in the whole process, which seems unlikely.
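A toy calculation of why the keyhole is narrow (my illustrative model, not the post's equations): take self-improvement to follow dI/dt = I^p. For p < 1 growth is only polynomial, at exactly p = 1 it is exponential, and for any p > 1 it blows up in finite time; the question is whether real negative feedbacks push p well below 1.

    # Toy model (my assumption, not the post's actual dynamics): dI/dt = I**p.
    # p < 1: polynomial growth; p = 1: exponential; p > 1: finite-time blow-up.
    def simulate(p, dt=1e-3, t_max=10.0, i0=1.0, cap=1e12):
        i, t = i0, 0.0
        while t < t_max and i < cap:
            i += dt * i ** p   # forward Euler step of dI/dt = I**p
            t += dt
        return t, i

    for p in (0.5, 1.0, 1.5):
        t, i = simulate(p)
        print("p=%.1f: I=%.3g at t=%.2f" % (p, i, t))

(Analytically, with I(0) = 1: p = 0.5 gives I(t) = (1 + t/2)^2, p = 1 gives e^t, and p = 1.5 reaches infinity at t = 2, so the regimes differ in kind, not just in degree.)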