When do you think they’ll reach 1 million rides per day?
My impression is that the growth rate is pretty underwhelming. But I don’t have hard data on the growth rate and this would totally change my mind if e.g. the population of self-driving cars was 10xing every year, or even doubling. (Currently it’s like, what, 500? And they only operate during certain hours in certain geofenced locations?)
Here is some data, via Matthew Barnett and Jess Riedl:
The number of cumulative miles driven by Cruise’s autonomous cars is growing exponentially, at roughly 1 OOM per year.
https://twitter.com/MatthewJBar/status/1690102362394992640
Oh shit! So, seems like my million rides per day metric will be reached sometime in 2025? That is indeed somewhat faster than I expected. Updating, updating...
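The "sometime in 2025" figure follows from a simple extrapolation. A minimal sketch, with the caveat that the thread only gives the ~1 OOM/year growth rate; the current ride volume used here is a hypothetical placeholder, not a figure from the thread:

```python
import math

# Back-of-envelope extrapolation. The ~1 OOM/year growth rate comes from
# the Cruise mileage data above; the current ride volume is an assumed,
# illustrative number only.
current_rides_per_day = 10_000   # hypothetical starting volume, mid-2023
growth_per_year = 10.0           # roughly 1 order of magnitude per year
target = 1_000_000               # the "million rides per day" milestone

# Years until target, assuming the exponential trend continues:
years_to_target = math.log10(target / current_rides_per_day) / math.log10(growth_per_year)
print(f"~{years_to_target:.1f} years")  # ~2.0 years from mid-2023, i.e. mid-2025
```

Under these assumptions the milestone lands about two years out; the answer is quite sensitive to the starting volume, shifting by a full year for every additional 10x gap.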
Thanks!
I know for Cruise they’re operating ~300 vehicles here in SF (I was previously under the impression this was a hard cap by law until the approval a few days ago, but I’m no longer sure of this). The geofence and hours vary by user, but my understanding is that the highest tier of users (maybe just employees?) have access to Cruise 24/7 with a geofence encompassing almost all of SF, and then there are lower tiers of users with various restrictions, like tighter geofences and 9pm-5:30am hours. I don’t know what their growth plans look like now that they’ve been granted permission to expand.
OK, thanks. I’ll be curious to see how fast they grow. I guess I should admit that it does seem like ants are driving cars fairly well these days, so to speak. Any ideas on what tasks might be necessary for AI R&D automation that are a lot harder than driving cars? So far I’ve got things like ‘coming up with new paradigms’ and ‘having good research taste for what experiments to run.’ That and long-horizon agency, though long-horizon agency doesn’t seem super necessary.
Why this particular number?
I didn’t choose it particularly carefully; it was a milestone that to me signified “it’s actually a profitable regular business now, it really works, there isn’t any ‘catch’ anymore like there has been for so many years.”
You said “AGI is nowhere close” but then I could be like “Have you heard of GPT-4?” and you’d be like “that doesn’t count for reasons X, Y, and Z” and I’d be like “I agree, GPT-4 sure is quite different in those ways from what was long prophesied in science fiction. Similarly, what Waymo is doing now in San Francisco is quite different from the dream of robotaxis.”
GPT-4 is an AGI, though not the stereotypical AGI as generally discussed on this forum for about a decade (it has no drive to do anything and no ability to self-improve); it is basically an Oracle/tool AI, as per Karnofsky’s original proposal in the discussion with Eliezer here. The contrast between the apparent knowledge and the lack of drive to improve is confusing. If you recall, the main argument against a Tool AI was “to give accurate responses it will have to influence the world, not just remain passive”. GPT shows nothing of the sort. It completes the next token as best it can, hallucinates pretty often, and does not care whether it hallucinates, even when it knows it does. I don’t think even untuned models like Sydney showed any interest in changing anything about their inputs. GPT is not the agentic AGI that we all had in mind for years, and it is not clearly close to being one.
In contrast, Waymo robotaxis are precisely the stereotypical example of a robotaxi: a vehicle that gets you from point A to point B without a human driver. It will be a profitable business, and how soon depends mostly on regulations, not on capabilities. There is still a way to go on reliability in edge cases, but it is already better than a human driver most of the time.
One argument is that state-of-the-art generative models might become a lot more agentic if given real-world sensors and actuators… say, put inside a robotaxi. Who knows, but it does not look obviously so at this point. There is already some sort of AI controlling most autonomous vehicles; I have no idea how advanced it is compared to GPT-4.
I feel like you are being unnecessarily obtuse—what about AutoGPT-4 with code execution and browsing etc.? It’s not an oracle/tool, it’s an agent.
Also, hallucinations are not an argument in favor of it being an oracle. Oracles supposedly just told you what they thought, they didn’t think one thing and say another.
I agree that there are differences between AutoGPT-4 and classic AGI, and if you feel like they are bigger than the differences between current Waymo and classic robotaxi dreams, fair enough. But the difference in difference size ain’t THAT big, I say. I think it’s wrong to say “Robotaxis are already here… AGI is nowhere in sight.”
I guess there is no useful discussion possible after a statement like that.
(I edited the OP to be a bit more moderate & pointed at what I’m really interested in. My apologies for being so hasty and sloppy.)