I strong-upvoted this, but I fear you won’t see a lot of traction on this forum for this idea.
I have a vague understanding of why, but I don’t think I’ve heard compelling enough reasons from other LWers yet. If someone has some, I’d be happy to read them or to be pointed towards them.
I value empiricism highly, i.e. putting ideas into action to be tested against the universe; but I think I’ve read EY state somewhere that a superintelligence would need to perform very few or even zero experiments to find out a lot (or even most? all?) true things about our universe that we humans need painstaking effort and experiments for.
Please don’t consider this very vague recollection as anywhere close to a steelman.
I think this was motivated by how many bits of information can be taken in even with human-like senses, and by how a single bit of information can halve a set of hypotheses. Where I haven’t yet seen sufficient support for this argument: this can indeed be true for very valuable bits of information, but are we assuming that any entity will easily be able to receive those very valuable bits? Surely many bits are redundant and carry no novel information, and some bits are very costly to obtain. Sometimes you are lucky if you can so much as eliminate a single candidate hypothesis, and even that is costly and requires interacting with the universe rather than just passively observing it.
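To make the halving claim concrete, here is a toy sketch (my own construction, not from any comment above): a hypothesis space of 1024 candidate “laws,” each deterministically predicting a 0/1 outcome for a set of queries. One noiseless observation of a maximally informative query should eliminate roughly half of them:

```python
import random

# Toy model: each hypothesis is a tuple of 0/1 predictions for N_QUERIES
# possible experiments. One of them is designated the true law.
random.seed(0)
N_QUERIES = 20
hypotheses = [
    tuple(random.randint(0, 1) for _ in range(N_QUERIES))
    for _ in range(1024)
]
true_law = hypotheses[0]

# Run a single experiment: observe the true outcome of one query,
# then discard every hypothesis that predicted the other outcome.
query = 3
observed = true_law[query]
survivors = [h for h in hypotheses if h[query] == observed]

print(len(hypotheses), "->", len(survivors))  # roughly a 50% cut
```

Note that the halving only happens because the query splits the space near-evenly and the observation is noiseless; redundant or noisy queries (the cases raised above) eliminate far fewer hypotheses per observation.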
But let’s hear it from others!
(I’m not sure if this spectrum of positions has any accepted name, maybe rationalist vs. empiricist?)
I think I’ve read EY state somewhere that a superintelligence would need to perform very few or even zero experiments to find out a lot (or even most? all?) true things about our universe that we humans need painstaking effort and experiments for.
EY is probably wrong. While more intelligence allows deeper analysis, which can sometimes extract the independent variables of a complex problem, or find the right action, from less data, there are limits. When there are thousands of variables and only finite, noisy data (as with most medical data), superintelligences will very likely be almost as stymied as humans are*.
Of course, what a superintelligence could do is ask for the smallest number of experiments needed to deconfuse the various competing theories, and/or analyze far more data than any living human is capable of. A superintelligence could recalculate, or entirely flush, its priors. It could ultimately solve medical problems at a pace that humans cannot.
*Another way to look at it: imagine a ‘Sherlock Holmes’ chain of reasoning. Now realize that for every branch in a story where Sherlock deduces that “this pipe tobacco combined with these footprints means...” there are thousands of other valid possibilities that also fit the data. Weak data leaves a very large number of valid world states consistent with it. A human may get “stuck” on the wrong branch, lacking the cognitive capacity to consider the others, while a superintelligence may be able to hold thousands of the possibilities in memory. Either way, neither knows which branches are correct.
Where EY is correct is that a superintelligence could then consider many possible experiments and find the ones with the most information gain. Perfect experiments that give perfectly clean bits halve the number of remaining world states with each bit gained. (Note that EY, again, is probably wrong in that there may often be no experiments that produce data that clean.)
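The “find the experiments with the most information gain” step can be sketched numerically. The following is my own illustration (not anything from the thread), using the standard expected-information-gain formula over a uniform prior on a finite hypothesis set: a clean experiment that splits the set 50/50 yields exactly 1 bit, while lopsided or redundant experiments yield much less — which is the gap between the ideal and what noisy domains actually offer:

```python
import math
from collections import Counter

def expected_info_gain(hypotheses, predict):
    """Expected entropy reduction from observing one experiment's outcome,
    assuming a uniform prior and that each hypothesis h deterministically
    predicts an outcome predict(h)."""
    n = len(hypotheses)
    prior_entropy = math.log2(n)
    outcome_counts = Counter(predict(h) for h in hypotheses)
    # Expected posterior entropy: each outcome leaves a uniform posterior
    # over the c hypotheses that predicted it, with entropy log2(c).
    expected_posterior = sum(
        (c / n) * math.log2(c) for c in outcome_counts.values()
    )
    return prior_entropy - expected_posterior

hyps = list(range(16))
clean = lambda h: h % 2       # splits 16 hypotheses 8/8
lopsided = lambda h: h == 0   # splits them 1/15
redundant = lambda h: 0       # all predict the same outcome

print(expected_info_gain(hyps, clean))      # 1.0 bit
print(expected_info_gain(hyps, lopsided))   # ~0.34 bits
print(expected_info_gain(hyps, redundant))  # 0.0 bits
```

A superintelligence maximizing this quantity over candidate experiments would indeed pick the clean split when one exists; the objection above is that in many real domains only the lopsided and redundant options are on the menu.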
Most notably, this.
Also this, and in this comment on it.