How tractable is technological progress (of the kind we might use AGI to automate) in general? More broadly, if you have (e.g.) AGI systems that can do the very rough equivalent of 1000 serial years of cognitive work by 10 collaborating human scientists over the span of a couple of years, how much progress can those systems make on consequential real-world problems?
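To make the hypothetical concrete, here is a rough back-of-the-envelope calculation. It is only a sketch: the figures are the ones from the question above, and reading "1000 serial years" as per-scientist thinking time is my assumption.

```python
# Back-of-the-envelope sketch of the hypothetical above.
# Assumption: "1000 serial years" means each of the 10 simulated scientists
# gets 1000 years of thinking time, delivered within a ~2-year wall-clock window.

serial_years_per_scientist = 1000   # from the hypothetical
num_scientists = 10                 # from the hypothetical
wall_clock_years = 2                # "a couple of years"

serial_speedup = serial_years_per_scientist / wall_clock_years          # ~500x faster serial thinking
total_researcher_years = serial_years_per_scientist * num_scientists    # ~10,000 researcher-years of work

print(f"Serial speedup over a human thinker: {serial_speedup:.0f}x")
print(f"Total cognitive labour delivered: {total_researcher_years:,} researcher-years")
```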
How much of science is cognitive work versus running experiments in the real world? Have there been attempts to quantify that?
MIRI and other people thinking about strategies for ending the risk period use “how much physical experimentation is needed, how fast can the experiments be run, how much can they be parallelized, how hard is it to build and operate the equipment, etc.?” as one of the key criteria for evaluating strategies. The details depend on what technologies you think are most likely to be useful for addressing existential risk with AGI (which is not completely clear, though there are plausible ideas out there). We expect a lot of speed advantages from AGI, so the time cost of experiments is an important limiting factor.
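One way to see why experiment time becomes the binding constraint is an Amdahl's-law-style calculation. The experiment shares and the 100x cognitive speedup below are illustrative assumptions, not figures from MIRI or anyone else:

```python
# Amdahl's-law-style sketch: if some fraction of research time is physical
# experimentation that cannot be sped up, that fraction caps the overall gain.
# The experiment fractions and the 100x cognitive speedup are illustrative assumptions.

def effective_speedup(experiment_fraction: float, cognitive_speedup: float) -> float:
    """Overall speedup when only the cognitive share of the work is accelerated."""
    cognitive_fraction = 1.0 - experiment_fraction
    return 1.0 / (experiment_fraction + cognitive_fraction / cognitive_speedup)

for f in (0.05, 0.30, 0.70):
    print(f"experiments take {f:.0%} of research time -> "
          f"overall speedup ~{effective_speedup(f, 100.0):.1f}x "
          f"(ceiling {1 / f:.1f}x however fast the cognition)")
```

Even with a 100x thinking speedup, the overall gain collapses toward 1/f, which is the sense in which the time cost of experiments is the limiting factor.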
Are there any organisations set up to research this kind of question (going into universities and studying how research gets done)? I'm wondering if we need a specialism, called something like "AI prediction", that aims to gather this kind of data.
If this topic interests you, you may want to reach out to the Open Philanthropy Project, as they’re interested in supporting efforts to investigate these questions in a more serious way.
Hi Rob, I had hoped to find people I could support. I am interested in the question; I'll see if I think it is more important than the other questions I am interested in.
AI Impacts has done some research in this area, I think.
They look like they are set up for researching existing literature and doing surveys, but they are not necessarily set up to do studies that collect data in labs.
The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence.
They are still part of the orient step, rather than the observation step.
But there are still lots of interesting things there; thanks for pointing me at them.
Are there any reasons for this expectation? In software development generally, and machine learning specifically, it often takes much longer to solve a problem the first time than on successive instances. The intuition this primes is that a proto-AGI is likely to stumble and require a lot of manual assistance the first time it attempts any one Thing, and in general the Thing will take longer to do with an AI than without. The advantage, of course, is that afterwards similar problems are solved quickly and efficiently, which is what makes working on AI pay off.
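For what it's worth, the "pays off afterwards" part of this intuition is just an amortisation argument. A toy sketch, with every number made up purely to show the shape of the trade-off:

```python
# Toy amortisation model of "slow and hand-held the first time, fast afterwards".
# Every number below is a made-up placeholder; only the shape of the comparison matters.

def ai_total_hours(n_instances: int, first_solve: float, marginal: float) -> float:
    """Hours for an AI workflow: an expensive first solve, then cheap repeats."""
    return first_solve + (n_instances - 1) * marginal

def human_total_hours(n_instances: int, per_instance: float) -> float:
    """Hours for humans solving every instance from scratch at a flat cost."""
    return n_instances * per_instance

for n in (1, 10, 1000):
    ai = ai_total_hours(n, first_solve=500.0, marginal=0.1)   # hypothetical costs
    human = human_total_hours(n, per_instance=20.0)           # hypothetical cost
    print(f"n={n:>4}: AI {ai:>8.1f} h vs humans {human:>8.1f} h")
```

On the first instance the AI route loses badly; it only pays off once there are enough similar problems to amortise the first solve over.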
AFAICT, the claim that any form of not-yet-superhuman AGI will quickly, efficiently, and autonomously solve the problems it encounters in solving more and more general classes of problems (aka “FOOM”) is entirely ungrounded.
Dawkins’s “Middle World” idea seems relevant here. We live in Middle World, but we investigate phenomena across a wide range of scales in space and time. It would at least be a little surprising to discover that the pace at which we do it is special and hard to improve on.
I agree that research can probably be improved upon quickly and easily. Lab-on-a-chip is obviously one way we are doing that currently. If an AGI system has the backing of a large company or country and can get new things fabbed in secrecy, it can improve on these kinds of things.
But I still think it is worth trying to quantify things. We can get a sense of stealth scenarios where the AGI is being developed by non-state, non-megacorp actors that can't easily fab new things. We can also get a sense of how useful things like lab-on-a-chip are for speeding up the relevant science. Are we out of low-hanging fruit, and is it taking us more effort to find novel, interesting chemicals and materials?
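If someone did try to quantify the low-hanging-fruit question, one crude starting point would be tracking output per researcher-year over time, in the spirit of the "are ideas getting harder to find" literature. A minimal sketch; the function is generic, and the inputs shown are placeholder values, not real bibliometric data:

```python
# Minimal sketch of one way to quantify "are we out of low-hanging fruit":
# output (e.g. novel compounds or materials characterised) per researcher-year, over time.
# The function is generic; the inputs shown are placeholders, not real data.

from typing import List, Sequence

def output_per_researcher_year(discoveries: Sequence[float],
                               researcher_years: Sequence[float]) -> List[float]:
    """Research productivity per period: discoveries divided by the effort spent on them."""
    return [d / r for d, r in zip(discoveries, researcher_years)]

# Placeholder series for three successive periods (illustrative only):
print(output_per_researcher_year(discoveries=[120, 150, 160],
                                 researcher_years=[1_000, 2_500, 6_000]))
# A declining series would suggest the remaining fruit is hanging higher.
```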