I chose it not particularly carefully, as a milestone that to me signified “it’s actually a profitable regular business now, it really works, there isn’t any ‘catch’ anymore like there has been for so many years.”
You said “AGI is nowhere close,” but then I could be like “Have you heard of GPT-4?” and you’d be like “that doesn’t count for reasons X, Y, and Z” and I’d be like “I agree, GPT-4 sure is quite different in those ways from what was long prophesied in science fiction. Similarly, what Waymo is doing now in San Francisco is quite different from the dream of robotaxis.”
GPT-4 is an AGI, though not the stereotypical example of AGI as generally discussed on this forum for about a decade (no drive to do anything, no ability to self-improve); it is basically an Oracle/tool AI, as per Karnofsky’s original proposal in the discussion with Eliezer here. The contrast between its apparent knowledge and its lack of drive to improve is confusing. If you recall, the main argument against a Tool AI was “to give accurate responses it will have to influence the world, not just remain passive.” GPT shows nothing of the sort. It completes the next token as best it can, hallucinates pretty often, and does not care that it hallucinates even when it knows it does. I don’t think even untuned models like Sydney showed any interest in changing anything about their inputs. GPT is not the agentic AGI we all had in mind for years, and it is not clearly close to becoming one.
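To make the tool/oracle framing concrete, here is a minimal sketch; `complete` is a hypothetical stand-in for any next-token model, not a real API. The point is just that a completion model is a passive map from prompt to text, with no feedback loop into the world:

```python
# Hypothetical stand-in for any autoregressive completion model
# (not a real API). It is a pure function from prompt to continuation.
def complete(prompt: str) -> str:
    # A real model would predict the next tokens here; this toy body
    # just returns a canned string so the sketch runs.
    return " Paris. (Possibly a hallucination; the model wouldn't care.)"

# The model's entire causal footprint is the returned string.
# It takes no actions, observes nothing, and never tries to
# influence its future inputs.
answer = complete("Q: What is the capital of France?\nA:")
print(answer)
```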
In contrast, Waymo robotaxis are precisely the stereotypical example of a robotaxi: a vehicle that gets you from point A to point B without a human driver. It will be a profitable business, and how soon depends mostly on regulations, not on capabilities. There is still a way to go on reliability in edge cases, but it is already better than a human driver most of the time.
One argument is that state-of-the-art generative models might become a lot more agentic if given real-world sensors and actuators… say, put inside a robotaxi. Who knows, but it does not look obviously so at this point. Some sort of AI already controls most autonomous vehicles; I have no idea how advanced it is compared to GPT-4.
I feel like you are being unnecessarily obtuse. What about AutoGPT-4 with code execution, browsing, etc.? It’s not an oracle/tool; it’s an agent.
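For concreteness, a minimal sketch of the kind of loop AutoGPT-style wrappers run; `llm`, `browse`, and `run_code` are hypothetical stand-ins, not any real library’s API. The agency comes from the wrapper feeding tool output back into the model’s context, not from the model itself:

```python
# Hypothetical stand-ins, not a real library API.
def llm(prompt: str) -> str:
    # A real model call goes here; this toy body ends the loop at once.
    return "DONE: toy answer"

def browse(url: str) -> str:
    return f"(contents of {url})"

def run_code(source: str) -> str:
    return "(output of executing the code)"

def agent(goal: str, max_steps: int = 10) -> str:
    """Ask the model for an action, execute it, feed the result back."""
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        action = llm(history + "Next action (BROWSE:<url> | RUN:<code> | DONE:<answer>)\n")
        kind, _, arg = action.partition(":")
        if kind == "DONE":
            return arg.strip()
        elif kind == "BROWSE":
            observation = browse(arg.strip())
        elif kind == "RUN":
            observation = run_code(arg)
        else:
            observation = "unrecognized action"
        # This feedback edge is what turns a passive completion model
        # into something that acts: its outputs change its next inputs.
        history += f"{action}\nObservation: {observation}\n"
    return "step budget exhausted"

print(agent("Find the current Waymo service area."))
```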
Also, hallucinations are not an argument in favor of it being an oracle. Oracles supposedly just told you what they thought; they didn’t think one thing and say another.
I agree that there are differences between AutoGPT-4 and classic AGI, and if you feel they are bigger than the differences between current Waymo and classic robotaxi dreams, fair enough. But the gap between those two gaps ain’t THAT big, I say. I think it’s wrong to say “Robotaxis are already here… AGI is nowhere in sight.”
I guess there is no useful discussion possible after a statement like that.
(I edited the OP to be a bit more moderate & pointed at what I’m really interested in. My apologies for being so hasty and sloppy.)