Strictly speaking about superhuman AGI: I think you've summarized the relative difficulty (or impossibility) of this task well :) I can't say I agree that the goal is void of human values, though (I'm talking about safety in particular; not sure if that makes a difference?). It seems impractical right from the start?
I also think these considerations seem manageable, though, when applied to the narrow AI we are producing today. But where's the appetite to continue down the ANI road? I can't really believe we wouldn't want more of the same in different fields of endeavor…