I wonder if they’re actually using a utility function in the sense of [probability * utility], or just going with [aim for safe car > unsafe car] unilaterally, regardless of the likelihood of crashing into either. E.g., treating a 1% chance of crashing into the safe car and an 80% chance of crashing into the unsafe car the same as a 99% chance of crashing into the safe car and a 0.05% chance of crashing into the unsafe car, and choosing to crash into the safe car in both cases.
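To make the distinction concrete, here is a rough Python sketch of the two decision rules. It is purely illustrative and not from the article; the harm values assigned to hitting each car are made-up assumptions.

```python
def expected_utility_choice(p_hit_safe, p_hit_unsafe,
                            harm_safe=1.0, harm_unsafe=10.0):
    """Pick the target with the lower probability-weighted expected harm.
    harm_safe / harm_unsafe are hypothetical numbers for illustration only."""
    eu_safe = p_hit_safe * harm_safe        # expected harm if aiming at the safe car
    eu_unsafe = p_hit_unsafe * harm_unsafe  # expected harm if aiming at the unsafe car
    return "safe car" if eu_safe <= eu_unsafe else "unsafe car"

def unilateral_choice(p_hit_safe, p_hit_unsafe):
    """Always aim at the safer car, ignoring the crash probabilities."""
    return "safe car"

# The two scenarios from the comment above:
print(expected_utility_choice(0.01, 0.80))    # -> safe car
print(expected_utility_choice(0.99, 0.0005))  # -> unsafe car: probabilities now dominate
print(unilateral_choice(0.99, 0.0005))        # -> safe car, regardless of the odds
```

Under the unilateral rule both scenarios get the same answer; under the probability-weighted rule the answer flips once crashing into the safe car becomes much more likely.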
The article is speculation about the moral (and legal) issues of plausibly near-future technology; current self-driving cars are experimental vehicles not designed to safely operate autonomously in emergency situations.