I agree. I think the most important use of the concept is in question 3, and so for timeline purposes we can rephrase “human-level intelligence” as “human-level competence at improving its source code, combined with a structure that allows general intelligence.”
Question 3 would then read “What probability do you assign to the possibility of an AGI with human-level competence at improving its source code being able to self-modify its way up to massively superhuman skills in many areas within a matter of hours/days/< 5 years?”
I don’t think most AI researchers think of “improving its source code” as one of the benchmarks in an AI research program. Whether or not you think it is, asking them about a benchmark they’ve actually thought about (I really like Nilsson’s 80% of human jobs, especially since it jibes well with a Hansonian singularity) seems more likely to get an informative response.
Might be worth specifying whether “human-level competence at improving its source code” here means “as good at improving source code as an average professional programmer,” “as good at improving source code as an average human,” “as good at improving source code as the best professional programmer,” or something else.