XiXiDu: You should clarify this “human-level intelligence” concept; it seems to be systematically causing trouble. For example:
“By AI having ‘human-level intelligence’ we mean that it’s a system that’s about as good or better (perhaps unevenly) than humans (or small groups of humans) at activities such as programming, engineering and research.”
The idea of “human-level intelligence” inspired by science fiction or naive impressions of AI, the one that refers to somewhat human-like AIs, is pervasive enough that when better-informed people hear a term like “human-level intelligence”, they round it up to this cliché and proceed to criticize it.
I agree. I think the most important use of the concept is in question 3, and so for timeline purposes we can rephrase “human-level intelligence” as “human-level competence at improving its source code, combined with a structure that allows general intelligence.”
Question 3 would then read “What probability do you assign to the possibility of an AGI with human-level competence at improving its source code being able to self-modify its way up to massively superhuman skills in many areas within a matter of hours/days/< 5 years?”
I don’t think most AI researchers think of “improving its source code” as one of the benchmarks in an AI research program. Whether or not you think it is, asking them to identify a benchmark they’ve actually thought about (I really like Nilsson’s “80% of human jobs” benchmark, especially since it jibes well with a Hansonian singularity) seems more likely to get an informative response.
Might be worth specifying whether “human-level competence at improving its source code” here means “as good at improving source code as an average professional programmer,” “as good at improving source code as an average human,” “as good at improving source code as the best professional programmer,” or something else.
Agreed. But not all respondents trash the question just because it’s poorly phrased. Nils Nilsson writes:
I really like this guy.