Yes, this is almost exactly it. I don’t expect frontier LLMs to carry out a complicated, multi-step process and recover from obstacles.
I think of this as the “squirrel bird feeder test”. Squirrels are ingenious and persistent problem solvers, capable of overcoming chains of complex obstacles. LLMs really can’t do this (though Devin is getting closer, if demos are to be believed).
Here’s a simple test: Ask an AI to open and manage a local pizza restaurant, buying kitchen equipment, dealing with contractors, selecting recipes, hiring human employees to serve or clean, registering the business, handling inspections, paying taxes, etc. None of these are expert-level skills. But frontier models are missing several key abilities. So I do not consider them AGI.
However, I agree that LLMs already have superhuman language skills in many areas. They have many, many parts of what’s needed to complete challenges like the above. (On principle, I won’t try to list what I think they’re missing.)
I fear the period between "actual AGI" and "weak ASI" will be extremely short. And I don’t actually believe there is any long-term way to control ASI.
I fear that most futures lead to a partially aligned superhuman intelligence with its own goals. And any actual control we have will be transitory.
> Here’s a simple test: Ask an AI to open and manage a local pizza restaurant, buying kitchen equipment, dealing with contractors, selecting recipes, hiring human employees to serve or clean, registering the business, handling inspections, paying taxes, etc. None of these are expert-level skills. But frontier models are missing several key abilities. So I do not consider them AGI.
I agree that current AI systems don’t/can’t do this, and that none of these tasks are considered expert-level skills for humans. I disagree that this is a simple test, or the kind of thing a typical human can do without lots of feedback, failures, or assistance. Many very smart humans fail at some or all of these tasks: they give up on starting a business, mess up their taxes, have a hard time navigating bureaucratic red tape, and never learn to cook. I agree that if an AI could do these things it would be much harder to argue against it being AGI, but it’s important to remember that many healthy, intelligent, adult humans can’t, at least not reliably. Also, remember that most restaurants fail within a couple of years even after making it through all these hoops. The failure rate is high even with experienced restaurateurs doing the managing.
I suppose you could argue for a definition of general intelligence that excludes a substantial fraction of humans, but for many reasons I wouldn’t recommend it.
Yeah, the precise ability I’m trying to point to here is tricky. Almost any human (barring certain forms of senility, severe disability, etc.) can do some version of what I’m talking about. But as in the restaurant example, not every human could succeed at every possible example.
I was trying to better describe the abilities that I thought GPT-4 was lacking, using very simple examples. And it started looking way too much like a benchmark suite that people could target.
Suffice it to say, I don’t think GPT-4 is an AGI. But I strongly suspect we’re only a couple of breakthroughs away. And if anyone builds an AGI, I am not optimistic we will remain in control of our futures.
Got it, makes sense, agreed.