Hal:
At some point, you’ve improved a computer program. You had to decide, somehow, what tradeoffs to make, on your own. We should assume that a superhuman AI will be at least as good at improving programs as we are.
I can’t think of any programs of broad scope that I would call unimprovable. (The AI might not be able to improve a given algorithm on a given iteration, but if it ever declared itself perfectly optimized, I’d expect we would declare it broken. In fact, that sounds like EURISKO. An AGI should at least keep trying.)
Also: Any process that it knows how to do, that it has learned, it can implement in its own code, so it does not have to ‘think things out’ with its high-level thinking algorithms. This is repeatable for everything it learns.
(We can’t do the same thing to create an AI because we don’t have access to our own algorithms, or really even to our memories. If an AI can learn to recognize breeds of dogs, then it can trace its own thoughts to determine by what process it does that. Since the learning algorithm probably isn’t perfectly optimized for learning to recognize dogs, the learned process it is using is probably not perfectly efficient either.)
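Here is a minimal sketch of that idea in Python, with a toy agent whose names and structure are all hypothetical: the agent answers queries with a slow, general deliberation routine, traces its own answers, and once it has enough traces it compiles them into a fast specialized routine and installs it in place of the deliberative one.

```python
# Hypothetical sketch: an agent replaces its slow, general deliberation
# with a specialized routine compiled from traces of its own behavior.
import time

def deliberate_classify(features):
    """Slow, general-purpose reasoning (stand-in for high-level thinking)."""
    time.sleep(0.01)  # simulate expensive deliberation
    return "corgi" if features.get("leg_length") == "short" else "greyhound"

class Agent:
    def __init__(self):
        self.classify = deliberate_classify  # start out deliberating
        self.trace = []                      # record of (input, answer) pairs

    def solve(self, features):
        result = self.classify(features)
        self.trace.append((features, result))
        # Once enough behavior has been traced, "compile" the observed
        # process into a direct lookup and install it over the slow path.
        if self.classify is deliberate_classify and len(self.trace) >= 3:
            table = {tuple(sorted(f.items())): r for f, r in self.trace}
            def fast_classify(features, _table=table):
                key = tuple(sorted(features.items()))
                # Unseen cases still fall back to deliberation.
                return _table.get(key) or deliberate_classify(features)
            self.classify = fast_classify  # the self-modification step
        return result

agent = Agent()
for f in [{"leg_length": "short"}, {"leg_length": "long"},
          {"leg_length": "short"}]:
    agent.solve(f)
print(agent.solve({"leg_length": "short"}))  # now answered by the fast path
```

The compiled routine need not be as general as the deliberation that produced it; it only has to be correct on the cases the agent has actually mastered, with deliberation kept as the fallback.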
Making the metacognitive level part of the object level lets you turn knowledge and metaknowledge directly into cognitive improvements, for every piece of knowledge, including knowledge about how to program.
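And a minimal illustration of that last point, again in Python and again hypothetical: because a program’s procedures are objects at its own object level, it can read them as data and install improved versions of them, which is exactly the access we lack to our own algorithms.

```python
# Hypothetical sketch: a program inspects one of its own procedures
# and replaces it with a better implementation it knows about.
import inspect

def slow_sum(xs):
    total = 0
    for x in xs:   # naive element-by-element accumulation
        total += x
    return total

# Metacognitive step: the program can read its own object-level code...
print("I currently compute sums with:\n", inspect.getsource(slow_sum))

# ...and apply "knowledge about how to program" to rewrite it.
def improved_sum(xs):
    return sum(xs)  # delegate to the optimized builtin

slow_sum = improved_sum          # install the improvement
print(slow_sum(range(10)))       # 45, now via the improved routine
```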