My understanding is that “AGI” means AI that is able to do anything that a human can. GPT-3 can do well at some tasks (writing sentences) but presumably isn’t very good at driving a car or many other common human tasks.
GPT-3 can play the Turing Test, and I think Turing was right to exclude things like “driving a car” from the definition of AGI. To quote:
We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane.
Given the success of transformer models in every other subfield of AI, I’m sure a GPT-like model would make a decent self-driving car.
To be clear, I mean an AI which can pilot a car correctly on US roads for 99% of seconds, which has already been achieved by many companies, including Waymo, Tesla, and Comma.ai. None of these companies has “self-driving cars” in the sense of a legal, marketable Level 5 (L5) car, because the standard for a legal, marketable L5 car is way beyond 99%.
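As a rough back-of-envelope sketch of why 99%-of-seconds is nowhere near that standard (the per-second framing and the one-hour trip length here are my own illustrative assumptions, not figures from the thread):

```python
# Back-of-envelope: how much incorrect piloting does a 99%-of-seconds driver
# accumulate per hour, compared with much more reliable (hypothetical) drivers?
SECONDS_PER_HOUR = 3600

def bad_seconds_per_hour(per_second_correct_rate: float) -> float:
    """Expected seconds of incorrect piloting per hour of driving."""
    return (1.0 - per_second_correct_rate) * SECONDS_PER_HOUR

for rate in (0.99, 0.999, 0.999999):
    print(f"{rate:.6%} correct -> {bad_seconds_per_hour(rate):.4f} bad seconds/hour")

# At 99% correct, that is roughly 36 seconds of incorrect piloting every hour,
# which is why the bar for a legal, marketable L5 car is far beyond 99%.
```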
Makes sense. I’ve always been using “G” more literally as “general”, but it’s certainly possible that this is unusual.