The attempt to analytically model the recalcitrance of Bayesian inference is an interesting idea, but I’m afraid it leaves out some key points. Reasoning is not just repeated application of Bayes’ theorem. If it were, everyone would be equally smart except for processing speed and data availability. Rather, the key element is coming up with good approximations of P(D|H) when data and memory are severely limited. This skill relies on much more than a fast processor: it takes simple but accurate models of the rest of the world, and knowledge of the correct algorithms for combining various truths into logical conclusions.
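To make that concrete, here is a minimal sketch (a toy coin-flip model of my own, with illustrative names, not anything from the post): the posterior update is a few lines of arithmetic, and all of the difficulty hides in the likelihood function, which for anything richer than a coin must be approximated.

```python
import numpy as np

# Toy illustration: infer a coin's bias from flips over a hypothesis grid.
# The Bayes update itself is mechanical; all the work is in P(D|H).
hypotheses = np.linspace(0.01, 0.99, 99)                  # candidate biases H
prior = np.full_like(hypotheses, 1.0 / len(hypotheses))   # uniform prior P(H)

def likelihood(data, h):
    """Exact P(D|H) for i.i.d. flips; cheap here, intractable in general."""
    heads = sum(data)
    return h ** heads * (1 - h) ** (len(data) - heads)

data = [1, 0, 1, 1, 0, 1]  # observed flips (1 = heads)
posterior = prior * np.array([likelihood(data, h) for h in hypotheses])
posterior /= posterior.sum()            # normalize to get P(H|D)
print(hypotheses[posterior.argmax()])   # MAP estimate, ~0.67
```

Replace the coin with the world, and `likelihood` becomes exactly the part that requires intelligence to approximate well.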
Some of it does fall into the category of having the correct prior beliefs, but those are hardly “accidentally gifted”: learning the correct priors, whether from experience with data or through optimization “in a box”, is a critical aspect of becoming intellectually capable. So the recalcitrance of prediction, though it clearly goes to infinity eventually in the absence of new data, is not obviously high. I would also add that for your argument against the intelligence explosion to hold, the recalcitrance of prediction would have to be not just “predictably high” but increasing at least linearly with intelligence in the range of interest, a very different claim, and one for which you have given little support.
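To spell out why it must be at least linear (my gloss, using the rate equation from Bostrom’s framework): the rate of intelligence growth is optimization power divided by recalcitrance,

    dI/dt = O(I) / R(I).

If a smarter system applies its own intelligence to self-improvement, so that O(I) grows roughly in proportion to I, then constant or sublinear R(I) leaves dI/dt increasing in I, and the takeoff is still explosive; only when R(I) grows at least linearly does the growth fall to linear or slower.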
I do think it’s likely that strictly limiting access to data would slow down an intelligence explosion. Bostrom argues that a “hardware overhang” could be exploited for a fast takeoff, but historically, advanced AI projects like AlphaGo and Watson have used state-of-the-art hardware during development, and that seems likely to remain the case. A data overhang, on the other hand, would be nearly impossible to avoid if the budding intelligence is given access to the internet, of which it can process only a small fraction in any reasonable amount of time, as the back-of-envelope arithmetic below suggests.
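A rough sense of scale (every number here is an illustrative assumption of mine, not a measured figure):

```python
# Back-of-envelope: how much of a web-scale corpus can be deeply processed?
# All figures below are illustrative assumptions, not measurements.
corpus_bytes = 100e15  # assume ~100 PB of usefully crawlable data
learn_rate = 100e6     # assume 100 MB/s of genuine learning throughput,
                       # far below raw I/O, since learning is the bottleneck
seconds_per_month = 30 * 24 * 3600
fraction_per_month = learn_rate * seconds_per_month / corpus_bytes
print(f"{fraction_per_month:.2%} of the corpus per month")  # ~0.26%
```

Under these assumptions the system covers well under one percent of the corpus per month, so the data overhang persists long after any hardware overhang has been exhausted.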