I would also like to add that learning is the best-known way of self-improvement. An agent can adopt a strategy that raises its effective intelligence by several orders of magnitude. (One such strategy: “if you have a question, ask Google” :)
Also, even an AI that is incapable of self-improvement or self-modification could still be very powerful and very dangerous if it has an IQ of 200 and works very quickly. It does not need to self-improve in order to take over the Internet and create a virus that kills all humans. This means that the ability to self-improve is not a necessary condition in Friendly AI research.
But if an AI does not know its own source code, or even the basic principles on which it was built, it would not be able to create a strong subagent. So here is a possible temporary solution: the AI could operate freely in the outside world except for one black box, which contains its own source code (assuming that no similar code exists anywhere outside, which is unlikely to be the case).