Self-driving cars had better use (some approximation of) acausal decision theory, even more so than a singleton AI, because the former will interact in Prisoner’s-Dilemma-like (PD-like) and Chicken-like ways with other instantiations of the same algorithm.
Self-driving cars have very complex goal metrics, along the lines of getting to the destination while disrupting traffic the least (still grossly oversimplifying).
The manufacturer is interested in every one of its cars getting to its destination in the least time, so the cars are programmed to optimize for the fleet as a whole. The manufacturer is also interested in getting human drivers to buy its cars, which makes not driving like a jerk a goal too. PD is problematic when agents are selfish, not when agents entirely share a goal. Think of two people playing a PD for money who both want to donate all proceeds to the same charity: pooling the winnings changes the payoffs to the point where it’s not a PD any more.
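To make that concrete, here’s a minimal sketch of the payoff transformation (the dollar amounts are hypothetical; the only assumption is that each player’s utility is the charity’s total, not their own take):

```python
# Standard PD payoffs in dollars (row player, column player), hypothetical:
# mutual cooperation pays 3 each, mutual defection 1 each,
# a lone defector gets 5 and the sucker gets 0.
PD = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# If both players donate everything to the same charity, each player's
# effective utility is the total donated, i.e. the sum of both payoffs.
shared = {moves: (a + b, a + b) for moves, (a, b) in PD.items()}

for moves, utils in sorted(shared.items()):
    print(moves, "->", utils)
# (C, C) -> (6, 6); (C, D) and (D, C) -> (5, 5); (D, D) -> (2, 2).
# Cooperation now strictly dominates, so the transformed game is no PD.
```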
I dunno, having a self-driving jerk car takes away whatever machismo one could have about driving… there’s something about a car where you can go macho and drive manual to be a jerk.
I don’t think it’d help sales at all if self-driving cars were causing accidents while themselves evading the collisions entirely.
A better example is already deployed: computer network protocols. TCP congestion control, for instance, is (roughly) the same algorithm running at every endpoint, each instance interacting mostly with other instances of itself.
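As a rough illustration (a toy AIMD loop in the spirit of TCP congestion control, not any real stack; the link capacity and starting rates are made up):

```python
# Additive-increase/multiplicative-decrease (AIMD), the rule at the heart
# of TCP congestion control. Every sender runs the same algorithm, and a
# fair split of the link emerges from instances of it interacting.
LINK_CAPACITY = 100.0

def step(rates):
    if sum(rates) > LINK_CAPACITY:          # congestion detected
        return [r / 2 for r in rates]       # multiplicative decrease
    return [r + 1.0 for r in rates]         # additive increase

rates = [80.0, 10.0]  # one greedy flow, one modest flow
for _ in range(200):
    rates = step(rates)
print([round(r, 1) for r in rates])  # the rates oscillate around a fair split
```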
Or different algorithms. How long after wide release will it be before someone modifies their car’s code to drive aggressively, on the assumption that cars running the standard algorithm will move out of the way to avoid an accident?
(I call this “driving like a New Yorker.” New Yorkers will know what I mean.)
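A toy version of that exploit, with made-up payoffs (the policy functions are hypothetical, not anyone’s actual firmware):

```python
# Chicken payoffs (hypothetical numbers): yielding costs a little time,
# a collision costs a lot, barging through while the other yields wins.
CHICKEN = {
    ("yield", "yield"): (0, 0),
    ("yield", "barge"): (-1, 2),
    ("barge", "yield"): (2, -1),
    ("barge", "barge"): (-10, -10),  # collision
}

def stock_policy():
    # Stock firmware: safety first, always yield in a conflict.
    return "yield"

def modded_policy():
    # Aggressive mod: barge, betting everyone else runs stock firmware.
    return "barge"

print(CHICKEN[(modded_policy(), stock_policy())])
# (2, -1): the modded car free-rides on the predictable one. As long as
# stock cars reliably yield, "barge" is the strictly better reply; that
# is exactly the incentive to modify your own car's code.
```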
That’s like driving without a license. Obviously the driver (the software) has to be licensed to drive the car, just as people are. Software that operates deadly machinery has had to be developed in specific ways, certified, and so on, for how many decades already? (Quite a few.)