If you thought an AGI couldn’t be built, what would you dedicate your life to doing? Perhaps another formulation, or a related question: what is the most important problem or issue not directly related to AI?
At the Singularity Summit, this question (or one similar to it) was asked, and (if I remember correctly) EY’s answer was something like: “If the world didn’t need saving? Possibly writing science fiction.”
That counterfactual seems like trouble. Do you mean literally impossible by the laws of physics (surely not)? Or highly improbable that humans will be able to build one? What counts as artificial intelligence—can we do human augmentation? What counts as “highly improbable”—can we really assume stupid or evil humans won’t be able to build one eventually?
It seems to me that plugging all the holes, ruling out every way a general intelligence could be built, would require messing with the laws of physics. We may want to specify a Cosmic Censor law.
Yeah, it is trouble. That’s why I offered the other formulation, though that might be too vague. Basically, I just wanted to know what non-transhumanist Eliezer would be doing. I don’t really care about the counterfactual so much as picking out a different topic area. Maybe the question should just be “If the idea of intelligence augmentation had never occurred to you and no one had ever shared it with you, what would you be doing with your life?”
Cool. But say the world does need saving, would there be a way to do it that doesn’t involve putting something smarter than us in charge?
I’d be working on life extension, followed by applied psychology and politics.