Why can’t the law just be: if you’re behind the wheel of an autonomous car with the ability to immediately override it, then you’re exactly as liable as a normal driver?
Now, in practice, if something goes wrong you’re going to be in a terrible position to stop it, because you won’t be paying attention. But the law can just say, “Well, you have to be paying attention or you’re liable!”, even if that really just means you’re taking on different risks by driving an autonomous car.
The law exploits people’s illusion of control all the time. Why not here?
Of course, this means you can’t just read a book while your car drives itself, at least at first. As market penetration increases, the software improves, and people get more comfortable, the laws will be relaxed. Especially because the more driverless cars there are on the road, the safer they will be.
Why can’t the law just be: if you’re behind the wheel of an autonomous car with the ability to immediately override it, then you’re exactly as liable as a normal driver?
For starters, this assumes that there is someone in the car at all.
The main application of autonomous cars is replacing truck drivers and delivery services, and maybe taxis. Otherwise you are just making driving slightly more convenient.
Why can’t the law just be: if you’re behind the wheel of an autonomous car with the ability to immediately override it, then you’re exactly as liable as a normal driver?
That’s the wrong question. You have to ask what the existing laws actually do. If one of Google’s autonomous cars crashed into another car, I would guess that a court would hold Google responsible in addition to the driver.
The law exploits people’s illusion of control all the time. Why not here?
That’s not a good line of argument when you’re trying to convince a politician to pass a law in your favor.
if you’re behind the wheel of an autonomous car with the ability to immediately override it, then you’re exactly as liable as a normal driver?
Economically, it makes sense to have cars that work like taxis, where the human occupant has no way of immediately overriding the system.
I don’t really understand the legal problem.
For starters, this assumes that there is someone in the car at all.
What’s so wrong with that? Ideally we want to move past it, but as regulation for right now it seems like a fine compromise.
The main application of autonomous cars is replacing truck drivers and delivery services, and maybe taxis. Otherwise you are just making driving slightly more convenient.
That’s how Google runs its cars at the moment.