The context of the OP, the hypothetical intelligence explosion, pretty much assumes this interpretation.
At the very least, it assumes that an AGI will be G enough to examine its own “code” (whatever symbolic substrate encodes the computations that define it, which need not look like the “source code” we are familiar with, though it may well start off as a human invention) and figure out how to change that code so as to become an even more effective optimizer.
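To make the shape of that loop concrete in a toy setting: the sketch below (plain Python; the objective function, the `hill_climb` routine, and the step-size parameter are all invented for illustration) shows an optimizer whose meta-level loop mutates one of its own defining parameters and keeps whichever variant of itself optimizes better. It is a loose analogy for “rewriting your own code to become a better optimizer,” not a claim about how an actual AGI would be implemented.

```python
import random

def objective(x):
    """Toy fitness landscape; the peak is at x = 3."""
    return -(x - 3.0) ** 2

def hill_climb(x, step, n_iters=100):
    """Object-level optimizer: plain hill climbing with a fixed step size."""
    for _ in range(n_iters):
        candidate = x + random.uniform(-step, step)
        if objective(candidate) > objective(x):
            x = candidate
    return x

def self_tuning_climb(x, step, n_rounds=10):
    """Meta-level loop: mutate the parameter that defines the optimizer's
    own search procedure, and keep whichever variant of itself climbs
    better. The step size stands in for the AGI's 'code'."""
    for _ in range(n_rounds):
        x = hill_climb(x, step)
        mutated = step * random.choice([0.5, 2.0])
        # Trial run: does the mutated version of "me" optimize better?
        if objective(hill_climb(x, mutated)) > objective(hill_climb(x, step)):
            step = mutated  # adopt the improved version of itself
    return x, step

print(self_tuning_climb(x=0.0, step=1.0))
```

The only point of the analogy is the structure of the loop: the thing being improved is the improver's own defining machinery, which is the ingredient that separates recursive self-improvement from merely producing more output.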
“Create more value” doesn’t in and of itself lead to an intelligence explosion. It’s something that would be nice to have, but not a game-changer.
That cross-domain ability, which is where we still have the lead, is the game-changer. (Dribbling a ping-pong ball is cute, but I want to know what the thing will do with an egg. Dribbling the egg is right out. Figuring out that the egg is food, that's the kind of thing you want an AGI to be capable of.)