I haven’t seen a rigorous treatment of the concept of AGI-completeness. Here are some suggested AGI-complete problems:
Full self-driving
Optical detection of objects
Robust self-replicating machines
Robots collaborating with humans
I don’t have a solid answer, but I would be surprised if the task “Write the book ‘Superintelligence’” required less general intelligence than “full self-driving from NY to SF”.
I’m interested why you would think that writing “Superintelligence” would require less GI than full self-driving from NY to SF. The former seems like a pretty narrow task compared to the latter.
I was unclear. Let me elaborate:
“AGI-Completeness” is the idea that a large class of tasks all have essentially the same difficulty, roughly analogous to “Turing-Completeness” and “NP-Completeness”.
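To ground the analogy, here is the standard definition from complexity theory; the “AGI” analogue below is my own informal gloss, not an established definition:

$$B \text{ is NP-complete} \iff B \in \mathrm{NP} \;\wedge\; \forall A \in \mathrm{NP}:\ A \le_p B$$

That is, a solver for one NP-complete problem yields a solver for every problem in NP via cheap reductions. By analogy, a task would be “AGI-complete” if any system capable of it could be cheaply adapted into a system capable of every other task in the class, so all such tasks stand or fall together.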
My claim in the post is that I doubt OpenAI’s hope that the task “Alignment Research” will turn out to be strictly easier than any dangerous task.
My claim in my comment above refers to the relative difficulty of two tasks:
1) Make a contribution to Alignment Research comparable to the contribution of the book ‘Superintelligence’.
2) Drive from NY to SF without human intervention, except for filling the gas tank etc.
I am willing to bet there won’t be a level of AI capability, persisting for more than 3 months, where 1) is possible but 2) is not.
I can’t give a really strong justification for this intuition. I could see a well-trained, top-percentile chimpanzee having a 0.1% probability of completing the car trip. I could not see any chimpanzee coming up with anything comparable to ‘Superintelligence’, no matter the circumstances.