A conversation with my son (18) resulted in a scenario that could at least serve as a starting point or drive intuition further: He is a fan of Elon Musk and Tesla, and we discussed the capabilities of Full Self Driving (FSD). FSD is already quite good and gets updated remotely. What if this scaled to AGI? What could go wrong? Of course, a lot. But there are good starting points:
Turning off the FSD/AI is part of the car's routine operation, except for updates and such.
FSD prevents accidents in a way that leaves the driver unharmed. Of course, humans could still be eliminated in ways not tied to driving the car, or to whatever FSD counts as an “accident.”
FSD protects not only the driver but also all other traffic participants. But again, what about capability gains outside the traffic domain?
Networked FSD optimizes traffic and thus, in a meaningful sense, drives in an impact-minimizing way. It could still have non-traffic impacts, though.
FSD gets rewarded for getting drivers where they want to go. Wireheading the driver will not lead to more driving instructions. The AI could come up with non-human drivers, though (see the toy sketch below).
FSD is closely tied to the experience of the driver, i.e., the user experience. That can be taken to extremes, but it is still a good starting point.
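To make the reward point a bit more concrete, here is a minimal toy sketch in Python, purely for intuition. Everything in it (the `TripOutcome` fields, `toy_reward`, the weights) is made up for illustration and is not anything Tesla actually uses; it just encodes the properties listed above: reward only for completed trips, a dominant safety penalty covering the driver and other traffic participants, a mild impact penalty, and indifference to being switched off.

```python
# Toy sketch (hypothetical, for intuition only): a reward shaped around the
# properties listed above. All names and weights are made up for illustration.
from dataclasses import dataclass


@dataclass
class TripOutcome:
    reached_destination: bool   # did the car get the driver where they wanted?
    driver_harmed: bool         # accident-prevention property
    others_harmed: bool         # protection of other traffic participants
    congestion_caused: float    # crude proxy for traffic "impact", 0.0 = none
    shutdown_requested: bool    # driver (or fleet operator) turned FSD off


def toy_reward(outcome: TripOutcome) -> float:
    """Reward that pays only for completed trips, not for manipulating the
    driver, and is indifferent to being switched off."""
    if outcome.shutdown_requested:
        # Corrigibility-flavoured clause: being turned off is neither
        # rewarded nor punished, so there is no incentive to resist it.
        return 0.0
    reward = 1.0 if outcome.reached_destination else 0.0
    if outcome.driver_harmed or outcome.others_harmed:
        reward -= 10.0          # safety dominates the trip reward
    reward -= 0.1 * outcome.congestion_caused  # mild impact penalty
    return reward


print(toy_reward(TripOutcome(True, False, False, 0.5, False)))  # 0.95
print(toy_reward(TripOutcome(True, False, False, 0.0, True)))   # 0.0
```

The caveats in the list are exactly what such a toy objective misses: it says nothing about impacts outside the traffic domain, and nothing in it stops a sufficiently capable optimizer from reinterpreting who counts as a “driver.”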