You might be thinking of Google’s self-driving car, which seems like it was designed from the ground up with traditional programming. I am thinking of systems like Comma.ai’s, which use machine learning to train self-driving cars by predicting what a human driver would do.
Of course you can put a regulator on the gas pedal and prevent the AI from speeding. But other issues are more difficult to control. How do you enforce that the AI should “try to drive with as little risk as possible”? We have very few training examples of accidents, and we can’t let the car experiment under real conditions.
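Concretely, the gas-pedal regulator can be a few lines of hard-coded logic wrapped around whatever the learned policy outputs. A hedged sketch (the names and numbers here are made up, not anyone’s real system):

```python
# Sketch of a hard-coded "regulator" around a learned driving policy's output.
# The control dictionary and thresholds are illustrative assumptions.

def limit_speed(policy_output: dict, current_speed_kph: float, speed_limit_kph: float) -> dict:
    """Clamp the learned policy's throttle so the car never exceeds the limit."""
    controls = dict(policy_output)
    if current_speed_kph >= speed_limit_kph:
        controls["throttle"] = 0.0                                # no further acceleration
        controls["brake"] = max(controls.get("brake", 0.0), 0.1)  # ease back under the limit
    return controls

# "Never speed" fits in a clamp like this; "drive with as little risk as
# possible" has no comparable rule you can bolt on afterwards.
print(limit_speed({"throttle": 0.8, "brake": 0.0}, current_speed_kph=55, speed_limit_kph=50))
```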
My guess on how to solve this issue is to develop a way to “speak” with the AI, so we can see what it is thinking and tell it what we would prefer it to do. But this is difficult, and there is little research on methods for doing it yet.
The Google car also uses machine learning. That still doesn’t mean it tries to emulate a human driver. The article doesn’t say that the car predicts what a human driver would do.
How do you enforce that the AI should “try to drive with as little risk as possible”?
There’s the example of the Google car waiting for the woman in the wheelchair who chased ducks. That’s behavior you get because Google’s algorithm is built to care about safety, and that you wouldn’t get from simply emulating human drivers.
Google’s car uses machine learning, but it isn’t based on it. There is a difference between a special “stop sign detector” function and an “end-to-end” approach where a single algorithm learns everything.
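A rough sketch of that difference (illustrative only, not either company’s actual code; the placeholder detector and the generic Keras-style `model.predict` call are assumptions):

```python
import numpy as np

# --- Modular: a special-purpose "stop sign detector" plus hand-written rules ---
def detect_stop_sign(frame: np.ndarray) -> bool:
    # placeholder; a real detector would be a trained classifier
    return bool(frame.mean() > 0.9)

def modular_controls(frame: np.ndarray) -> dict:
    if detect_stop_sign(frame):
        return {"throttle": 0.0, "brake": 1.0}   # explicit rule: stop at stop signs
    return {"throttle": 0.3, "brake": 0.0}

# --- End to end: one learned function from pixels to controls, no named concepts ---
def end_to_end_controls(frame: np.ndarray, model) -> dict:
    steering, throttle = model.predict(frame[np.newaxis, ...])[0]
    return {"steering": float(steering), "throttle": float(throttle)}
```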
Comma.ai’s business model is to pay people to upload their dashcam footage and to train neural networks on it. As far as I know, what I described is their approach.
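In code, that “predict what the human did” setup is essentially behavioral cloning. A minimal sketch with random stand-in data (Keras here for brevity; this is not a claim about Comma.ai’s actual architecture or data format):

```python
# Behavioral-cloning sketch: learn to map dashcam frames to the human
# driver's recorded steering angle. Shapes and data are made up.

import numpy as np
import tensorflow as tf

frames = np.random.rand(256, 66, 200, 3).astype("float32")                # stand-in dashcam frames
steering = np.random.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")  # stand-in human steering

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu", input_shape=(66, 200, 3)),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),            # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")  # imitate the human: minimize error vs. their steering
model.fit(frames, steering, epochs=1, batch_size=32, verbose=0)
```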
I would be surprised if they set up their system in a way where they can’t tell a car to approach a red light using less fuel than human drivers do.
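One hedged way you could “tell” such a system to approach red lights more economically than its human examples is to add a fuel-use term next to the imitation term in the training loss. Purely illustrative, not how Comma.ai actually trains:

```python
import tensorflow as tf

def training_loss(human_throttle, predicted_throttle, fuel_weight=0.1):
    """Imitate the human driver, but penalize throttle use a little."""
    imitation = tf.reduce_mean(tf.square(human_throttle - predicted_throttle))
    fuel_penalty = tf.reduce_mean(tf.nn.relu(predicted_throttle))  # crude stand-in for fuel burned
    return imitation + fuel_weight * fuel_penalty
```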
As far as accidents go, the idea that automatic braking should take over in emergency situations is already implemented in many cars on the road. It’s unlikely that such a system reacts the way a human-driven car would have reacted a decade ago.
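Automatic emergency braking is the same kind of hard override layered on top of whoever is driving, human or learned policy. A sketch with made-up thresholds (real systems fuse radar and camera data and tune their triggers carefully):

```python
# Emergency-braking override sketch; threshold and inputs are assumptions.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    if closing_speed_mps <= 0:
        return float("inf")                  # not closing on the obstacle
    return distance_m / closing_speed_mps

def apply_emergency_brake(distance_m: float, closing_speed_mps: float,
                          requested_brake: float) -> float:
    """Override the driver's or policy's brake command when a crash looks imminent."""
    if time_to_collision(distance_m, closing_speed_mps) < 1.5:   # assumed 1.5 s trigger
        return 1.0                           # full braking, regardless of the driver
    return requested_brake

print(apply_emergency_brake(distance_m=6.0, closing_speed_mps=8.0, requested_brake=0.0))  # -> 1.0
```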