This actually illustrates nicely some issues with the whole notion of “self-improving.”
Suppose Sally is an AI running on a hard-coded silicon chip with fixed RAM. One day Sally is given the job of establishing a policy to control resource allocation at the Irrelevant Outputs factory, and concludes that the most efficient mechanism for doing so is to implement in software, on the IO network, the same algorithms that its own silicon chip implements in hardware. So it does.
The program Sally just wrote can be thought of as a version of Sally that is not constrained to a particular silicon chip. (It probably also runs much slower, though that’s not entirely clear.)
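To make the scenario concrete, here is a minimal sketch (all names and the toy allocation rule are hypothetical, just stand-ins for whatever Sally's chip actually computes) of a fixed "hardware" decision procedure and Sally emitting a software copy of that same procedure to run on the factory network:

```python
# Toy illustration of the scenario above: Sally's decision procedure is
# fixed "in hardware", but she can emit a software re-implementation of
# that same procedure to deploy elsewhere. Names are hypothetical.
import inspect


def hardware_policy(demands, capacity):
    """Stand-in for the algorithm baked into Sally's silicon:
    allocate capacity to resource demands in proportion to their size."""
    total = sum(demands.values())
    if total == 0:
        return {name: 0.0 for name in demands}
    return {name: capacity * d / total for name, d in demands.items()}


def emit_software_copy():
    """Sally 'writes the program': produce source text implementing the
    same algorithm her chip runs, ready to run on the IO network."""
    return inspect.getsource(hardware_policy)


if __name__ == "__main__":
    # Sally, constrained to her chip, applies the policy directly...
    print(hardware_policy({"steel": 3, "plastic": 1}, capacity=100))
    # ...and also ships an unconstrained software version of herself.
    print(emit_software_copy())
```

The point of the sketch is only that the emitted program computes the same function the chip does, while living on different hardware, which is what makes the "version of Sally" framing tempting.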
In this scenario, is Sally self-modifying? Is it self-improving? I’m not even sure those are the right questions.