Contrary to the conditions of Bostrom’s intelligence explosion scenario, we have identified ways in which the recalcitrance of prediction, an important instrumental reasoning task, is prohibitively high.
To demonstrate how such an analysis could work, we analyzed the recalcitrance of prediction, using a Bayesian model of a predictive agent. We found that the barriers to recursive self-improvement through algorithmic changes are prohibitively high for an intelligence explosion.
No, you didn’t. You showed that there exists an upper bound on the amount of improvement that can be had from algorithmic changes in the limit. This is a very different claim. What we care about is what happens within the range close to human intelligence; it doesn’t matter that there’s a limit on how far recursive self-improvement can go, if that limit is far into the superhuman range. You equivocate between “recursive self-improvement must eventually stop somewhere”, which I believe is already widely accepted, and “recursive self-improvement will not happen”, which is a subject of significant controversy.
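To make the distinction concrete, here is a minimal sketch with invented numbers (nothing here is from the paper): even if algorithmic self-improvement has a hard ceiling at, say, 1000x the starting level, a logistic trajectory with that ceiling still blows through the near-human range long before the limit matters. The existence of a bound tells you nothing about where it sits relative to human level.

```python
# Toy model (my numbers, not the paper's): capability grows by recursive
# self-improvement, with algorithmic gains capped at CEILING. The cap exists,
# yet the trajectory still rockets past "human level" before the cap bites.
HUMAN_LEVEL = 1.0
CEILING = 1000.0 * HUMAN_LEVEL   # assumed hard limit on algorithmic gains
GROWTH = 0.5                     # assumed per-step self-improvement rate

capability = 0.9 * HUMAN_LEVEL   # start just below human level
for step in range(41):
    if step % 5 == 0:
        print(f"step {step:2d}: {capability / HUMAN_LEVEL:8.1f}x human")
    # Logistic update: growth only slows as the ceiling is approached.
    capability += GROWTH * capability * (1 - capability / CEILING)
```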
Agreed, the quoted "we found" claim overreaches. The paper does have a good point, though: the recalcitrance of further improvement can't be modeled as a constant; it necessarily scales with current system capability. Real-world exponentials become sigmoids: mold growing in your fridge and a nuclear explosion are both sigmoids that look exponential at first, and the difference is a matter of scale.
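A quick numerical sketch of that point, with arbitrary parameters of my own choosing: an exponential and a logistic with the same initial rate are nearly indistinguishable until the logistic comes within sight of its ceiling, and where that happens is set entirely by the scale parameter.

```python
import math

# Toy comparison (arbitrary parameters): pure exponential growth vs. logistic
# growth whose ceiling K stands in for recalcitrance rising with capability.
r = 0.3    # assumed growth rate, the same for both curves
K = 1e6    # assumed ceiling imposed by rising recalcitrance
x0 = 1.0   # starting capability

for t in range(0, 61, 10):
    exp_x = x0 * math.exp(r * t)
    # Closed-form logistic solution with the same initial growth rate.
    log_x = K / (1 + (K / x0 - 1) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exp_x:14.1f}  logistic={log_x:12.1f}")
```

The two curves track each other closely for the first several doublings and only diverge once the logistic nears its ceiling, which is exactly the "mold vs. nuclear explosion" point: the early data can't tell you the scale.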
Really understanding the dynamics of a potential intelligence explosion requires digging into the specific details of an AGI design versus the brain: inference and learning capability, compute and energy efficiency, plausible future hardware parameters, and so on. You can't show much with vague, broad-stroke abstractions.