I wrote about this previously here. I think you have to break it down by company; the reason they're not globally available differs for each one.
For Waymo, they have self-driving taxis in SF and Phoenix without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task "driving but your eyes are laser rangefinders". The reason they haven't scaled to cover every city, or at least more cities, is unclear to me; the obvious possibilities are that the LIDAR sensors and onboard computers are impractically expensive, that they have a surprisingly high manual-override rate and there's a big unscalable call center somewhere, or that they're being cowardly and trying to maintain zero fatalities forever (at scales where a comparable fleet of human-driven taxis would definitely have some fatalities).
In any case, I don't think the software/neural nets are likely to be the bottleneck.
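To make the "comparable fleet of human-driven taxis" point concrete, here's a rough back-of-envelope sketch in Python. The fleet size and annual mileage are hypothetical, and the ~1.3 fatalities per 100 million vehicle-miles baseline is a commonly cited US average for human drivers, not a Waymo figure:

```python
# Back-of-envelope: expected fatalities for a human-driven fleet.
# All numbers below are illustrative assumptions, not company data.

HUMAN_FATALITIES_PER_MILE = 1.3 / 100_000_000  # ~US average, illustrative

def expected_fatalities(fleet_miles: float) -> float:
    """Expected fatalities if humans drove fleet_miles at the US-average rate."""
    return fleet_miles * HUMAN_FATALITIES_PER_MILE

# Hypothetical robotaxi fleet: 1,000 cars averaging 50,000 miles/year each.
annual_miles = 1_000 * 50_000  # 50 million miles/year

print(expected_fatalities(annual_miles))        # ~0.65 expected per year
print(expected_fatalities(annual_miles * 100))  # ~65/year at 100x scale
```

So a fleet only modestly larger than today's deployments would, with human drivers, statistically expect a fatality within a couple of years; "zero fatalities forever" stops being a meaningful comparison point well before global scale.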
For Tesla, until recently, they were using surprisingly-low-resolution cameras. So instead of the cognitive task of driving as a human would solve it, they substituted the harder task "driving with a vision impairment and no glasses". They did upgrade the cameras within the past year, but it's hard to tell how much of the customer feedback reflects the current hardware version rather than past ones; sites like FSDBeta Community Tracker don't really distinguish between them. It also seems likely that their onboard GPUs are underpowered relative to the task.
As for Cruise, Comma.ai, and others—well, distance-to-AGI is measured only from the market leader, and just as GPT-4, Claude and Bard have a long tail of inferior models by other orgs trailing behind them, you also expect a long tail of self-driving systems with worse disengagement rates than the leaders.