MIRI and other people thinking about strategies for ending the risk period use “how much physical experimentation is needed, how fast can the experiments be run, how much can they be parallelized, how hard is it to build and operate the equipment, etc.?” as one of the key criteria for evaluating strategies. The details depend on what technologies you think are most likely to be useful for addressing existential risk with AGI (which is not completely clear, though there are plausible ideas out there). We expect a lot of speed advantages from AGI, so the time cost of experiments is an important limiting factor.
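The point about experiment time being a limiting factor can be made concrete with a toy model. This is my own illustration, not from the original discussion: the function and its parameters (`think_hours`, `experiment_hours`, `cognitive_speedup`, `parallel_lanes`) are hypothetical, and it assumes thinking can be accelerated arbitrarily while physical experiments run in real time and parallelize perfectly across independent lanes.

```python
def project_duration(think_hours, experiment_hours, cognitive_speedup, parallel_lanes=1):
    """Toy model: wall-clock hours for a project whose cognitive work can be
    sped up, but whose physical experiments run in real time.

    Assumes experiments split evenly across parallel_lanes independent setups
    (a simplification; in practice experiments often form serial chains).
    """
    return think_hours / cognitive_speedup + experiment_hours / parallel_lanes

# Even a 1000x cognitive speedup barely helps once experiments dominate:
human = project_duration(think_hours=900, experiment_hours=100, cognitive_speedup=1)
agi = project_duration(think_hours=900, experiment_hours=100, cognitive_speedup=1000)
# human: 1000.0 hours; agi: 100.9 hours -- the residual is almost all experiment time.
```

Under these assumptions, further speedup has to come from running experiments in parallel or needing fewer of them, which is why "how much physical experimentation is needed, and how parallelizable is it?" is a natural criterion for comparing strategies.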
Are there any organisations set up to research this kind of question (e.g. going into universities and studying how research is actually done)? I’m wondering if we need a specialism called something like “AI prediction”, which aims to gather this kind of data.
If this topic interests you, you may want to reach out to the Open Philanthropy Project, as they’re interested in supporting efforts to investigate these questions in a more serious way.
Hi Rob, I had hoped to find people I could support. I am interested in the question. I’ll see if I think it is more important than the other questions I am interested in.
They look like they are set up for researching existing literature and doing surveys, but they are not necessarily set up to do studies that collect data in labs.
The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence.
Are there any reasons for this expectation? In software development generally, and machine learning specifically, it often takes much longer to solve a problem the first time than on subsequent instances. The intuition this primes is that a proto-AGI is likely to stumble and require a lot of manual assistance the first time it attempts any given task, and that the task will generally take longer with the AI than without it. The advantage, of course, is that afterwards similar problems are solved quickly and efficiently, which is what makes working on AI pay off.
AFAICT, the claim that any form of not-yet-superhuman AGI will quickly, efficiently, and autonomously solve the problems it encounters in solving more and more general classes of problems (aka “FOOM”) is entirely ungrounded.
AI Impacts has done some research in this area, I think.
They are still part of the “orient” step rather than the “observe” step, in OODA-loop terms.
But still lots of interesting things. Thanks for pointing me at them.