I agree with Ben, and also, humanity successfully sent a spaceship to the Moon's surface on the second attempt and successfully sent people (higher stakes) to the Moon's surface on the first attempt. This shows that difficult technological problems can be solved without extensive trial and error. (Obviously some trial and error on easier problems was done to get to the point of landing on the Moon, and no doubt the same will be true of AGI. But there is hope that the actual AGI can be constructed without trial and error, or at least without the sort of trial and error where error is potentially catastrophic.)
The trouble with this analogy is that the rocket was a system of parts welded and bolted together. The function and rules of each subsystem remained the same throughout the flight, which made the whole thing possible to model. Self-improving AI would be like using the Saturn V's exhaust to melt and recast metal in other parts of the rocket during the flight to the Moon.
I can see a way to do self-improving AI: separate modular subsystems, each evaluated by some connection, direct or indirect, to the real world. In that case, while each subsystem may be a "black box" that is ever-evolving, its function stays the same. For example, you might have a box that re-renders scenes from a camera with the shadows removed. There is feedback, and there are ways for it to get better at its job. And there is a meta-system that can gut the architecture of that box and replace its internals with a new way of doing the task. But at all times the box is still just subtracting shadows; it never does anything else. A sketch of what I mean is below.
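To make the shape of that concrete, here is a minimal Python sketch of the scheme, under my own assumptions: the names `ShadowRemover`, `MetaSystem`, and `evaluate`, and the `Frame` stand-in, are all hypothetical illustrations, not any existing system.

```python
from abc import ABC, abstractmethod
from typing import Callable, List

Frame = List[float]  # hypothetical stand-in for real camera data


class ShadowRemover(ABC):
    """The fixed contract: take a frame, return the same scene minus shadows.
    The internals are a black box and may be replaced wholesale, but the box
    never does anything other than this one task."""

    @abstractmethod
    def process(self, frame: Frame) -> Frame:
        ...


def evaluate(box: ShadowRemover, frames: List[Frame],
             score: Callable[[Frame], float]) -> float:
    """Real-world grounding: score the box's outputs on held-out frames."""
    return sum(score(box.process(f)) for f in frames) / len(frames)


class MetaSystem:
    """May gut the box's architecture, but only with another implementation
    of the same interface, and only if it scores better on real data."""

    def __init__(self, box: ShadowRemover, frames: List[Frame],
                 score: Callable[[Frame], float]):
        self.box, self.frames, self.score = box, frames, score
        self.best = evaluate(box, frames, score)

    def propose(self, candidate: ShadowRemover) -> bool:
        # Keep the replacement only if it does the same fixed job better.
        s = evaluate(candidate, self.frames, self.score)
        if s > self.best:
            self.box, self.best = candidate, s
            return True
        return False
```

The point of the design is that the only path to self-modification runs through a fixed interface plus real-world scoring: the meta-system can replace the internals, but it cannot change what the box is for.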
I don’t think we need to explicitly plan for the later stages. If we have a sufficiently advanced AI that we know is aligned and capable of intelligently self-modifying without becoming unaligned, we can probably put more confidence in the seed AI’s ability to construct the final stages than in our ability to shape the seed AI to better construct the final stages.
Edit: that’s insufficient as stated. What I mean is that once you build the seed AI I described, any change you make to it explicitly for the purpose of guiding its takeoff will be practically useless, and possibly harmful, given the AI’s advantage over us. I think we may reach a point where we can trust the seed AI to do the job better than we can trust ourselves to do it.