Are you saying that the main bottleneck is iterative testing? Because we can’t just let a self-driving car loose and see what it will do.
Or are you saying the main bottleneck is that self-driving cars have much higher robustness requirements in deployment, which means that the problem itself is much harder?
Or both, in which case, which one do you think is more important?
I suspect that testing is one of the more important bottlenecks.
I suspect that some current systems are safe enough if their caution is dialed up to where they’re annoyingly slow 2% of the time, and that leaves them not quite reliable enough at reaching a destination to be competitive.
I don’t think iterative testing is more of a bottleneck here than it is for language/image models; all of them need iterative testing to gather the massive amounts of data required to cover all the edge cases you could care about. Your second statement, about robustness, seems correct to me. I don’t think self-driving is especially harder or easier than language modelling in some absolute sense; it’s just that the bar to deployment is much higher for cars, because mistakes cost a lot more. If you wanted language models to be 99.999% robust to weird bugs, that would also be ridiculously hard. But if you want the opinion of someone right next to the problem, here’s Andrej Karpathy, the (former) director of self-driving at Tesla, explaining why self-driving is hard.