I’m interested in why you would think that writing “Superintelligence” would require less general intelligence than full self-driving from NY to SF. The former seems like a pretty narrow task compared to the latter.
I was unclear. Let me elaborate:
“AGI-Completeness” is the idea that a large class of tasks has the same difficulty, roughly analogous to “Turing-Completeness” and “NP-Completeness”.
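To make the analogy concrete, here is a minimal formal sketch of what I mean (my own informal gloss; the reduction relation and the task class T below are illustrative notation, not standard definitions):

```latex
% A minimal sketch of the completeness analogy. The reduction relation
% \preceq and the task class T are illustrative notation introduced here,
% not standard definitions.
%
% Read "A \preceq B" as: any system that can perform task B can be
% adapted, with modest overhead, to also perform task A.
%
% Then, by analogy with NP-completeness (where C is NP-complete iff C is
% in NP and every problem in NP reduces to C):
\[
  C \text{ is AGI-complete for } T
  \quad\iff\quad
  C \in T \ \text{ and } \ \forall A \in T :\; A \preceq C
\]
```

The claim that a large class of tasks shares the same difficulty then corresponds to all AGI-complete tasks being inter-reducible, just as all NP-complete problems are.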
My claim in the post is that I doubt OpenAI’s hope that the task “Alignment Research” will turn out to be strictly easier than every dangerous task.
My claim in my comment above refers to the relative difficulty of two tasks:
1) Make a contribution to Alignment Research comparable to the contribution of the book ‘Superintelligence’.
2) Drive a car from NY to SF without human intervention, except for filling the gas tank etc.
I am willing to bet that there won’t be a level of AI capability, persisting for more than 3 months, where 1) is possible but 2) is not.
I can’t give a really strong justification for this intuition. I could see a well-trained, top-percentile chimpanzee having a 0.1% probability of completing the car trip. I could not see any chimpanzee coming up with anything comparable to ‘Superintelligence’, no matter the circumstances.