I’d like to see more discussion of this. I read some of the FOOM debate, but I’m assuming there has been more discussion of this important issue since then?
I suppose the key question is about recursive self-improvement. We can grant hardware improvement (improved hardware allows the design of more complex and better hardware) because we are already on that treadmill. But how likely is algorithmic self-improvement? For an intelligence to be able to improve itself algorithmically, the following seem to need to hold:
The system needs to understand itself
There has to be some capacity that can be improved without detriment to some other capacity (otherwise you are doing some self-optimization, but not necessarily a net improvement)
If it is the memeplex that gives us our generality (as suggested by our flowering of discovery over the past 250 years compared to the preceding 300,000 years of Homo sapiens), it might not be understandable. It would be in the weights, or their equivalents in whatever substrate the AI uses. No human would understand it either.
Fiddling about with the weights without that understanding would likely lead to trade-offs, and so the second condition might not hold.
I’m not saying AI won’t change history, but we need an accurate view of how it will change things.
On the matter of software improvements potentially available during recursive self-improvement, we can look at the current pace of algorithmic improvement, which has probably been faster than gains from hardware scaling for some time now. So that’s another lower bound on what AI will be capable of, assuming the extrapolation holds up.
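To make that extrapolation concrete, here is a toy sketch of how the two growth rates compound. The doubling times are placeholder assumptions for illustration, not measured figures; the point is only that if algorithmic efficiency improves faster than hardware, their product grows much faster than either alone.

```python
# Toy extrapolation of "effective compute" growth, combining hardware scaling
# with algorithmic efficiency gains. Doubling times are illustrative
# placeholders, not measured values.

HARDWARE_DOUBLING_YEARS = 2.0      # assumed: physical compute available doubles every 2 years
ALGORITHMIC_DOUBLING_YEARS = 1.0   # assumed: compute needed for fixed performance halves every year

def effective_compute_multiplier(years: float) -> float:
    """Combined gain: hardware growth multiplied by algorithmic efficiency gain."""
    hardware_gain = 2 ** (years / HARDWARE_DOUBLING_YEARS)
    algorithmic_gain = 2 ** (years / ALGORITHMIC_DOUBLING_YEARS)
    return hardware_gain * algorithmic_gain

for years in (1, 3, 5, 10):
    print(f"{years:>2} years: ~{effective_compute_multiplier(years):,.0f}x effective compute")
```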
I’m wary about that one, because that isn’t a known “general” intelligence architecture, so we can expect AIs to make better learning algorithms for deep neural networks, but not necessarily to improve themselves.