I’d like to focus on the example offered: “Write a better algorithm than X for storing, associating to, and retrieving memories.” Is this a well-defined task?
Wouldn’t we want to ask: better by what measure? Is there some well-defined metric for this task? And what if it is better at storing but worse at retrieving, or vice versa? What if it gets better-quality answers but takes longer? Is that an improvement or not? And what if the AI looks at the algorithm, works on it for a while, and admits in the end that it can’t see a way to really make it better? I’m a professional software developer, and there are not many standard algorithms where I’d expect to be able to come up with a significant improvement.
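To make the ambiguity concrete, here is a minimal sketch in Python of how one might try to benchmark two competing associative-memory implementations. The metric names, numbers, and weights are all invented for illustration; the point is only that “better” becomes well-defined solely after someone chooses how to weigh storage time, retrieval time, and answer quality against one another, and different choices can rank the algorithms differently.

```python
from dataclasses import dataclass

# Hypothetical benchmark results for two competing associative-memory
# implementations. The field names and numbers are invented for illustration.
@dataclass
class MemoryBenchmark:
    store_seconds: float      # time to store the test corpus
    retrieve_seconds: float   # time to answer the test queries
    recall_quality: float     # fraction of queries answered "correctly", 0..1

def score(b: MemoryBenchmark,
          w_store: float = 1.0,
          w_retrieve: float = 1.0,
          w_quality: float = 10.0) -> float:
    """Collapse three incomparable metrics into a single number.

    The weights are an arbitrary choice: change them and the ranking of
    'which algorithm is better' can flip.
    """
    return (w_quality * b.recall_quality
            - w_store * b.store_seconds
            - w_retrieve * b.retrieve_seconds)

# Algorithm A stores quickly but retrieves slowly; B is the opposite.
a = MemoryBenchmark(store_seconds=1.0, retrieve_seconds=5.0, recall_quality=0.90)
b = MemoryBenchmark(store_seconds=5.0, retrieve_seconds=1.0, recall_quality=0.92)

print(score(a), score(b))            # 3.0 vs 3.2: B "wins" under the default weights
print(score(a, w_retrieve=0.1),
      score(b, w_retrieve=0.1))      # 7.5 vs 4.1: A "wins" if retrieval time barely matters
```

Nothing in the task statement tells us which set of weights is the right one, which is exactly the difficulty raised above.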
As I puzzle over these issues, I am unsure whether this example is supposed to represent a task where we (or really the AI, since I think we are to imagine it setting itself the task) would know exactly how to quantify “better.” Or is this associative-memory functionality supposed to lie in the deep, dark, hidden algorithmic depths of our minds, so that the hard part is figuring out how AI-enabling associative memory works in the first place? And once we have done so, would improving it be a basically mechanical task that any moderately smart AI or human could certainly accomplish? Or would it require a super-genius AI that can figure out improvements on almost any human-designed algorithm?
And does this example really lead to a recursive cycle of self-improvement, or is it like the optimizing compiler, which can speed up its database-access operations but doesn’t thereby become fundamentally any smarter?