Human/Machine Intelligence Parity by 2040? on Metaculus has a pretty high bar for human-level intelligence:

Assume that prior to 2040, a generalized intelligence test will be administered as follows. A team of three expert interviewers will interact with a candidate machine system (MS) and three humans (3H). The humans will be graduate students in each of physics, mathematics and computer science from one of the top 25 research universities (per some recognized list), chosen independently of the interviewers. The interviewers will electronically communicate (via text, image, spoken word, or other means) an identical series of exam questions of their choosing over a period of two hours to the MS and 3H, designed to advantage the 3H. Both MS and 3H have full access to the internet, but no party is allowed to consult additional humans, and we assume the MS is not an internet-accessible resource. The exam will be scored blindly by a disinterested third party.
Question resolves positively if the machine system outscores at least two of the three humans on such a test prior to 2040.
(I graduated in physics from a top-25 research university, and I’m not at all confident I’d pass this test myself.)
In any case, I wonder if it’s better not to focus too much on the question of “the right operational definition of human-level intelligence” and instead adopt Holden’s approach of talking about PASTA, in particular the last two sentences:
By “transformative AI,” I mean “AI powerful enough to bring us into a new, qualitatively different future.” The Industrial Revolution is the most recent example of a transformative event; others would include the Agricultural Revolution and the emergence of humans.
This piece is going to focus on exploring a particular kind of AI I believe could be transformative: AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement. I will call this sort of technology Process for Automating Scientific and Technological Advancement, or PASTA. (I mean PASTA to refer to either a single system or a collection of systems that can collectively do this sort of automation.) … [some paragraphs on what PASTA can do]
By talking about PASTA, I’m partly trying to get rid of some unnecessary baggage in the debate over “artificial general intelligence.” I don’t think we need artificial general intelligence in order for this century to be the most important in history. Something narrower—as PASTA might be—would be plenty for that.
The Metaculus definition is very interesting, as it is quite different from what M. Y. Zuo suggested as the natural interpretation of “human-level intelligence”.
I like the PASTA suggestion, thanks for quoting that! However, I wonder whether that bar is a bit too high.