Eliezer: “Or an even tougher question: On average, across the multiverse, do you think you would advise an intelligent species to stop performing novel physics experiments during the interval after it figures out how to build transistors and before it builds AI?”
But—if you’re right about the possibility of an intelligence explosion and the difficulty of the Friendliness problem, then building a novel AI is much, much more dangerous than creating novel physical conditions. Right?