People aren’t assuming that AI exceeding human intelligence “equates with that occurring with great speed”; they’re arguing for the latter point separately. E.g., see:
Yudkowsky’s Intelligence Explosion Microeconomics
Hanson and Yudkowsky’s AI-Foom Debate
Bostrom’s Superintelligence
Or, for a much quicker and more impressionistic argument, this post on FB.
Another simple argument: “Human cognition didn’t evolve to do biochemistry, nuclear engineering, or computer science; those capabilities just ‘came for free’ with the very different set of cognitive problems human brains evolved to solve in our environment of evolutionary adaptedness. This suggests that there’s such a thing as ‘general intelligence’ in the sense that there’s a kind of reasoning that lets you learn all those sciences without needing an engineer to specially design new brains or new brain modules for each new domain; and it’s the kind of capacity that a blind engineering process like natural selection was able to stumble on while ‘trying’ to solve a very different set of problems.”
Some other threads that bear directly on this question include:
What’s the track record within AI, or in automation in general? When engineers try to outperform biology on some specific task (and especially on cognitive tasks), how often do they hit a wall at par-biology performance; and when they don’t hit a wall, how often do they quickly shoot past biological performance on the intended dimension?
Are humans likely to be near an intelligence ceiling, or near a point where evolution was hitting diminishing returns (for reasons that generalize to AI)?
How hardware-intensive is AGI likely to be? How does this vary for, e.g., 10-year versus 30-year timelines?
Along how many dimensions might AGI improve on human intelligence? How likely is it that early AGI systems will be able to realize some of these improvements, and to what degree; and how easy is it likely to be to leverage easier advantages to achieve harder ones?
How tractable is technological progress (of the kind we might use AGI to automate) in general? More broadly, if you have (e.g.) AGI systems that can do the very rough equivalent of 1000 serial years of cognitive work by 10 collaborating human scientists over the span of a couple of years, how much progress can those systems make on consequential real-world problems? (For a sense of the raw scale involved, see the rough sketch after this list.)
If large rapid capability gains are available, how likely is it that actors will be willing (and able) to go slow? Instrumental convergence and Gwern’s post on tool AIs are relevant here.
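As a rough illustration of the raw scale that last scenario implies (all numbers are the hypothetical ones from the question above, not estimates of anything):

```python
# Back-of-the-envelope reading of the hypothetical scenario above: AGI systems
# doing the rough equivalent of 1000 serial years of work by a 10-person
# research team, compressed into about two calendar years.
serial_years_equivalent = 1000  # hypothetical figure from the question
team_size = 10                  # hypothetical figure from the question
calendar_years = 2              # "a couple of years"

serial_speedup = serial_years_equivalent / calendar_years  # ~500x faster serial thinking
researcher_years = serial_years_equivalent * team_size     # ~10,000 researcher-years of output

print(f"Implied serial speed-up: ~{serial_speedup:.0f}x")
print(f"Implied cognitive output: ~{researcher_years:,} researcher-years in {calendar_years} calendar years")
```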
Each of these is a big topic in its own right. I’m noting all these different threads because I want to be clear about how many directions you can take this if you’re curious; that said, feel free to pick just one thread and start the discussion there, since all of this is a lot to cover simultaneously, and it’s useful to ask questions and start hashing things out before you’ve read literally everything that’s been written on the topic.
On the same topic, see also my paper How Feasible is the Rapid Development of Artificial Superintelligence (recently accepted for publication in the 21st Century Frontiers focus issue of Physica Scripta), in which I argue that what we know about human expertise and intelligence suggests that scaling up from human-level intelligence to qualitatively superhuman intelligence might be relatively fast and simple.
How much of science is cognitive work versus running experiments in the real world? Have there been attempts to quantify that?
MIRI and other people thinking about strategies for ending the risk period use “how much physical experimentation is needed, how fast can the experiments be run, how much can they be parallelized, how hard is it to build and operate the equipment, etc.?” as one of the key criteria for evaluating strategies. The details depend on what technologies you think are most likely to be useful for addressing existential risk with AGI (which is not completely clear, though there are plausible ideas out there). We expect a lot of speed advantages from AGI, so the time cost of experiments is an important limiting factor.
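Here’s a minimal sketch of why the time cost of experiments is such an important limiting factor (placeholder numbers only, in the spirit of Amdahl’s law): if AGI only accelerates the cognitive portion of a research pipeline, overall progress is bounded by the fraction of the work that still runs at physical-experiment speed.

```python
def effective_speedup(cognitive_fraction: float, cognitive_speedup: float) -> float:
    """Amdahl-style bound on the overall speed-up of a research pipeline when
    only the cognitive fraction is accelerated and the physical-experimentation
    fraction still runs in real time."""
    physical_fraction = 1.0 - cognitive_fraction
    return 1.0 / (physical_fraction + cognitive_fraction / cognitive_speedup)

# Placeholder shares of cognitive vs. experimental work, for illustration only.
for cognitive_share in (0.5, 0.9, 0.99):
    overall = effective_speedup(cognitive_share, cognitive_speedup=1000)
    print(f"cognitive share {cognitive_share:.0%}: overall speed-up ~{overall:.0f}x "
          f"despite a 1000x cognitive speed-up")
```

The point is just to make the intuition quantitative: unless the experimental share of the work is small, or the experiments themselves can be parallelized and automated, even a very large cognitive speed advantage translates into a much smaller overall one.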
Are there any organisations set up to research this kind of question (going into universities and studying how research actually gets done)? I’m wondering if we need a specialism, something like “AI prediction”, that aims to gather this kind of data.
If this topic interests you, you may want to reach out to the Open Philanthropy Project, as they’re interested in supporting efforts to investigate these questions in a more serious way.
Hi Rob, I had hoped to find people I could support. I am interested in the question; I’ll see if I think it is more important than the other questions I am interested in.
AI Impacts has done some research in this area, I think.
They look like they are set up for researching existing literature and doing surveys, but they are not necessarily set up to do studies that collect data in labs. The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence.
They are still part of the orient step, rather than the observation step.
But there are still lots of interesting things there. Thanks for pointing me at them.
Are there any reasons for this expectation? In software development generally, and machine learning specifically, it often takes much longer to solve a problem the first time than to solve subsequent instances. The intuition this primes is that a proto-AGI is likely to stumble and require a lot of manual assistance the first time it attempts any one thing, and generally the thing will take longer to do with an AI than without. The advantage, of course, is that afterwards similar problems are solved quickly and efficiently, which is what makes working on AI pay off.
AFAICT, the claim that any form of not-yet-superhuman AGI will quickly, efficiently, and autonomously solve the problems it encounters in solving more and more general classes of problems (aka “FOOM”) is entirely ungrounded.
Dawkins’s “Middle World” idea seems relevant here. We live in Middle World, but we investigate phenomena across a wide range of scales in space and time. It would at least be a little surprising to discover that the pace at which we do it is special and hard to improve on.
I agree that research can probably be improved upon quickly and easily. Lab-on-a-chip technology is obviously one way we are doing that currently. If an AGI system has the backing of a large company or country and can get new things fabbed in secrecy, it can improve on these kinds of things.
But I still think it is worth trying to quantify things. We can get a sense of stealth scenarios where the AGI is being developed by actors that are neither states nor megacorps and can’t easily fab new things. We can also get a sense of how useful things like lab-on-a-chip are for speeding up the relevant science. Are we running out of low-hanging fruit, and is it taking more effort to find novel, interesting chemicals and materials?
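One very simple way to start quantifying that last question (the figures below are purely hypothetical placeholders meant to show the shape of the calculation, not data): track research effort per novel discovery over time and check whether the ratio is rising.

```python
# Hypothetical placeholder figures -- swap in real R&D and discovery counts
# if you actually wanted to answer the question.
researcher_years_spent = {1980: 1_000, 2000: 5_000, 2015: 20_000}
novel_materials_found = {1980: 100, 2000: 300, 2015: 600}

for year, effort in researcher_years_spent.items():
    effort_per_discovery = effort / novel_materials_found[year]
    print(f"{year}: ~{effort_per_discovery:.0f} researcher-years per novel material (hypothetical)")
```

A rising effort-per-discovery ratio would be one sign that the low-hanging fruit is running out.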