Bostrom talks about a seed AI being able to improve its ‘architecture’, presumably as opposed to lower level details like beliefs. Why would changing architecture be particularly important?
One way changing architecture could be particularly important is improvement in the space- or time-complexity of its algorithms. A seed AI with a particular set of computational resources that improves its architecture to make decisions in (for example) logarithmic time instead of linear could markedly advance along the “speed superintelligence” spectrum through such an architectural self-modification.
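To make that concrete, here is a toy sketch (the task, names, and numbers are invented for illustration and are not from Bostrom or the comment above): the same "decision" is made first by scanning an unsorted store of options in linear time, and then, after an "architectural" change to a sorted index, by binary search in logarithmic time on the same hardware.

```python
# Toy sketch: one lookup "decision" before and after an architectural change.
# All names and numbers here are illustrative, not from Bostrom.
import bisect
import random
import time

n = 1_000_000
options = [random.random() for _ in range(n)]   # original flat, unsorted memory
sorted_options = sorted(options)                # re-architected: a sorted index

def decide_linear(target):
    """O(n): scan every stored option for the largest value <= target."""
    best = None
    for x in options:
        if x <= target and (best is None or x > best):
            best = x
    return best

def decide_logarithmic(target):
    """O(log n): binary-search the sorted index for the same answer."""
    i = bisect.bisect_right(sorted_options, target)
    return sorted_options[i - 1] if i else None

target = 0.5
t0 = time.perf_counter(); a = decide_linear(target); t1 = time.perf_counter()
b = decide_logarithmic(target); t2 = time.perf_counter()
assert a == b
print(f"linear scan: {t1 - t0:.4f}s   binary search: {t2 - t1:.6f}s")
```

The absolute timings do not matter; the point is that the second version reaches the same answer with the same computational resources, so the speed-up comes purely from reorganizing how the information is structured, which is the flavor of "architectural" improvement the complexity argument points at.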
One’s answer depends on how imaginative one wants to get.
One situation is if the AI were to realize we had unknowingly trapped it in a local optimum on its fitness landscape, too deep for it to progress upward significantly without substantial rearchitecting. We might ourselves be stuck in a local optimality bump or depression, and have transferred some resultant handicap to our AI progeny. If it, with computationally enhanced resources, can “understand” indirectly that it is missing something (analogy: we can detect “invisible” celestial objects by noting perturbations in what we can see, using computer modeling and enhanced instrumentation), it might realize that a fundamental blind spot was engineered in, and that a redesign is needed. For example, what if it realizes it needs to have emotion (or different emotions) for successful personal evolution toward enlightenment? What if it is more interested in beauty and aesthetics than in finding deep theorems and proving string theory? We don’t really know, collectively, what “superintelligence” is.
To the hammer, the whole world looks like…
How do we know some logical positivist engineer’s vision of AI nirvana will be shared by the AI? How many kids would rather be a painter than a Harvard MBA “just like daddy planned for”?
Maybe the AIs will find things that are “analog”, like art, more interesting than what they know in advance they can do (anything computable), which becomes relatively uninteresting?
What will they find worth doing, if success at anything within the bounds of the halting problem (and they might extend and complete those theorems first) is already a given?