‘Convince a human that he is interacting with a human’ is a low bar. Furthermore, fully self-driving cars are already available, just not at an acceptable level of reliability. If we set the reliability bar at ‘no worse than a texting teenager with a basic license’, it’s probably easily attainable today.
How about we apply performance metrics to robot drivers and doctors that would be impossible for a human to achieve, then move the goalposts every time it looks like they might be met? That way, we can protect the status quo from disruption while pretending we’re “just being cautious about existential risk”.
We passed ‘limited variations of the Turing test’ some time ago: https://www.penny-arcade.com/comic/2002/10/04