Humans can learn, but that is far from what is necessary to reach a level above your own, on your own.
Yes, you also need the ability to self-modify, and the ability to “take 20” or fail and keep going. But I just argued that the phrase “on your own” obscures the issue: if one AGI gets the chance to rewrite itself (and does not take over the world), I see no realistic way to stop another from trying at some point.
Also, how do you know that any given level of intelligence is capable of handling its own complexity effectively?
I don’t think I need to talk about “any given level”. If humans maintain a civilization long enough (and I don’t necessarily accept Eliezer’s rough timetable here), we’ll understand our own level well enough to produce human-strength AGI, directly or indirectly. By definition, the resulting AI will have at least a chance of understanding the process that produced it, given time. (When I try to think of an exception, I find myself thinking of uploads, and perhaps of byzantine programs that evolved inside computers; these might in theory fail to understand all but the human-designed parts of the process. But the second example seems unlikely on reflection, as it suggests vast amounts of wasted computation. Likewise, though I don’t know how much importance to attach to this, it seems to this layman as if biologists laugh at uploads and consider them a much harder problem than an AI that can program. Yet you’d need detailed knowledge of the brain’s biology to make an upload.) And of course such an AI can think faster than we do in many areas (or if it can’t, due to artificial restrictions, the next one can).
I don’t think that intelligence can be applied to itself efficiently.
You’ve established inefficiency as a logical possibility (in my judgement), but you don’t seem to have given much argument for it; I count two sentences in your P2 that directly address the issue. And you have yet to engage with the cumulative probability argument. Note that a human-level AGI that can see the problems or risks of self-modification may also see risk in avoiding it.
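To make the cumulative probability argument explicit (this is my own back-of-the-envelope framing, not something you wrote): suppose each independent attempt at self-modification, by any AGI anywhere, succeeds with some fixed probability $p > 0$. Then over $n$ attempts,

$$P(\text{at least one success}) = 1 - (1 - p)^n \longrightarrow 1 \quad \text{as } n \to \infty.$$

Even with $p = 0.01$ per attempt, roughly 300 independent attempts give about a 95% chance that at least one succeeds, since $0.99^{300} \approx 0.05$. The independence and fixed-$p$ assumptions are simplifications of mine, meant only to show why ruling out any single attempt does little to rule out the outcome.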
Even if the AGI is not told to hold, e.g. compute as many digits of Pi as possible,
If it literally has no other goals then it doesn’t sound like an AGI. The phrase “potential for superhuman intelligence” sounds like it refers to a part of the program that other people could (and, in my view, will) use to create a super-intelligence by combining it with more dangerous goals.