Another argument against the difficulties-of-self-modeling point: it's possible to become more capable by developing better theories rather than by building a complete self-model, and the former is probably more common.
An AI could notice inefficiencies in its own functioning, check whether they serve any purpose, and clean them up without having a complete model of itself.
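To make that concrete, here is a minimal sketch, purely hypothetical and not from the original argument: a program prunes inefficiencies from its own pipeline by trial removal, keeping a removal only if the observable output is unchanged on test inputs. No global model of why the pipeline was built is needed; all the function names and the test harness are illustrative assumptions.

```python
# Hypothetical toy illustration: "clean up" inefficiencies without a
# complete self-model, by trial-removing each step and checking whether
# the observable output changes on a set of test inputs.
import time

def double(x):
    return x * 2

def add_ten(x):
    return x + 10

def redundant(x):
    return x + 0          # serves no purpose: output unchanged

def slow_noop(x):
    time.sleep(0.01)      # wastes time: output unchanged
    return x

pipeline = [double, slow_noop, add_ten, redundant]

def run(steps, x):
    for step in steps:
        x = step(x)
    return x

def prune(steps, test_inputs):
    """Drop any step whose removal leaves outputs unchanged on all tests."""
    kept = list(steps)
    for step in steps:
        candidate = [s for s in kept if s is not step]
        if all(run(candidate, x) == run(kept, x) for x in test_inputs):
            kept = candidate    # the step served no observable purpose
    return kept

if __name__ == "__main__":
    slim = prune(pipeline, range(10))
    print([f.__name__ for f in slim])   # ['double', 'add_ten']
```

The check is purely behavioral ("does removing this change anything I care about?"), which is the point: cleanup of this kind requires local tests, not a complete model of the whole system.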
Suppose a self-improving AI is too cautious to go mucking about in its own programming, and too ethical to muck about in the programming of duplicates of itself. It still isn't trapped at its current level, even setting aside the reasonable approach of improving its hardware, though that may be a subtler problem than is generally assumed.
What if it just works on developing a better understanding of math, logic, and probability?