I mostly agree with the points written here. It’s actually (Section A, Point 1) that I’d like more clarification on:
AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains
When we have AGI working on hard research problems, it sounds akin to decades of human-level research compressed into just a few days, or perhaps even less. That may be possible, but often the bottleneck is not the theoretical framework or the proposed hypothesis, but waiting for experimental proof. If we say that an AGI will be a more rational agent than humans, do we not expect it to accumulate more experimental evidence to test its theories, for example to estimate the expected utility of pursuing a novel course of action?
I think there would still be some constraints on this process. Humans, for example, often wait until enough experimental proof has accumulated to validate certain theories (the Large Hadron Collider, the photoelectric effect, etc.). We need to observe nature to gather proof that the theory doesn’t fail in the scenarios where we expect it might fail. To accumulate such proof, we might build new instruments to gather new types of data and validate the theory on the now-larger set of available data. Sometimes that process can take years. Just because AGI will be smarter than humans, can we say that it’ll make proportionately faster breakthroughs in research?
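To make the “expected utility of pursuing a novel course of action” framing concrete, here is a toy value-of-information calculation of my own (all numbers and names are invented for illustration, not taken from the discussion): a rational agent compares the expected utility of acting on its current beliefs with the expected utility of first running an experiment and then acting on what it learns.

```python
# Toy value-of-information sketch (illustrative numbers only).
# The agent is unsure whether a novel approach works (p_works = 0.3).
# It can act now, or first run an experiment that reveals the truth,
# then choose the better action in each possible world.

p_works = 0.3          # prior probability the novel approach works
u_novel_works = 100.0  # utility if it pursues the approach and it works
u_novel_fails = -50.0  # utility if it pursues the approach and it fails
u_status_quo = 10.0    # utility of sticking with existing technology
experiment_cost = 5.0  # cost (in utility units) of running the experiment

# Expected utility of acting now, without further evidence:
eu_act_now = max(
    p_works * u_novel_works + (1 - p_works) * u_novel_fails,  # pursue the novel approach
    u_status_quo,                                             # or keep the status quo
)

# Expected utility of experimenting first, assuming a perfectly informative
# experiment: afterwards the agent picks the best action in each world.
eu_experiment_first = (
    p_works * max(u_novel_works, u_status_quo)
    + (1 - p_works) * max(u_novel_fails, u_status_quo)
) - experiment_cost

print(f"EU of acting now:          {eu_act_now:.1f}")                        # 10.0
print(f"EU of experimenting first: {eu_experiment_first:.1f}")               # 32.0
print(f"Value of information:      {eu_experiment_first - eu_act_now:.1f}")  # 22.0
```

Whenever that value of information is positive and the experiment is feasible, gathering the evidence first is the rational move, which is why the experimental bottleneck doesn’t simply vanish with more intelligence.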
I think Yudkowsky would argue that on a scale from never learning anything to eliminating half your hypotheses per bit of novel sensory information, humans are pretty much at the bottom of the barrel.
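To put a number on that scale (a toy back-of-the-envelope calculation of mine, not something from the thread): an ideal reasoner that extracts one full bit per observation halves its hypothesis space every time, so about forty well-chosen bits are enough to single out one hypothesis from a trillion, while a reasoner that extracts only a thousandth of a bit per observation needs tens of thousands of observations for the same narrowing.

```python
# Each fully informative bit of evidence halves the set of live hypotheses.
num_hypotheses = 2 ** 40      # roughly a trillion candidate hypotheses
bits_of_evidence = 40         # fully informative yes/no observations

remaining = num_hypotheses / 2 ** bits_of_evidence
print(remaining)              # 1.0 -- forty good bits pin down a single hypothesis

# A weak learner extracting only 0.001 bits per observation (an arbitrary
# illustrative number) needs vastly more observations for the same narrowing.
observations_needed = bits_of_evidence / 0.001
print(observations_needed)    # 40000.0
```

The point of the scale is the gap between those two extremes: the same evidence stream supports very different learning speeds depending on how efficiently the observer squeezes bits out of it.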
When the AI needs to observe nature, it can rely on petabytes of publicly available datasets from particle physics to biochemistry to galactic surveys. It doesn’t need any more experimental evidence to solve human physiology or build biological nanobots: we’ve already got quantum mechanics and human DNA sequences. The rest is just derivation of the consequences.
Sure, there are specific physical hypotheses that the AGI can’t rule out because humanity hasn’t gathered the evidence for them. But that, by definition, excludes anything that has ever observably affected humans. So yes, for anything that has existed since the inflationary period, the AGI will not be bottlenecked on physically gathering evidence.
I don’t really get what you’re pointing at with “how much AGI will be smarter than humans”, so I can’t really answer your last question. How much smarter than yourself would you say someone like Euler is? Is his ability to make scientific/mathematical breakthroughs proportional to that difference in smarts?
I assumed that there will come a time when the AGI has exhausted all available human-collected knowledge and data.
My reasoning for the comment was something like:
“Okay, what if AGI happens before we’ve understood dark matter and dark energy? The AGI would have incomplete models of these concepts (assuming it isn’t able to develop a full picture from the available data; that may well turn out to be the case, but dark energy is only a placeholder here. It could be some other concept we discover only in the year before the AGI’s creation and have relatively little data about). It then has a choice: either use existing technology (or create something better using existing principles), or carry out research into dark energy and see how it can be harnessed, given reasons to believe that the end solution would be far more efficient than the currently possible ones.
There might be types of data we never bothered capturing which might have been useful, or even essential, for building a robust understanding of certain aspects of nature. The AGI might pursue those data-capturing tasks, which could be bottlenecked by the amount of data needed, the time to collect it, etc. (though far less than what humans would require).”
Thank you for sharing the link. I had misunderstood what the point meant, but now I see. My speculation in the original comment was based on a naive understanding. The post you linked is excellent, and I’d recommend everyone give it a read.