I feel like he was falling into a kind of fallacy: he observed that a concept isn’t entirely coherent, and so rejected the concept altogether.
My go-to writeup on this is the “Imprecise definitions can still be useful” section of Luke Muehlhauser’s 2013 MIRI essay “What is Intelligence?”, which discusses the question of operationalizing the concept of “self-driving car”:
...consider the concept of a “self-driving car,” which has been given a variety of vague definitions since the 1930s. Would a car guided by a buried cable qualify? What about a modified 1955 Studebaker that could use sound waves to detect obstacles and automatically engage the brakes if necessary, but could only steer “on its own” if each turn was preprogrammed? Does that count as a “self-driving car”?
What about the “VaMoRs” of the 1980s that could avoid obstacles and steer around turns using computer vision, but weren’t advanced enough to be ready for public roads? How about the 1995 Navlab car that drove across the USA and was fully autonomous for 98.2% of the trip, or the robotic cars which finished the 132-mile off-road course of the 2005 DARPA Grand Challenge, supplied only with the GPS coordinates of the route? What about the winning cars of the 2007 DARPA Grand Challenge, which finished an urban race while obeying all traffic laws and avoiding collisions with other cars? Does Google’s driverless car qualify, given that it has logged more than 500,000 autonomous miles without a single accident under computer control, but still struggles with difficult merges and snow-covered roads?
Our lack of a precise definition for “self-driving car” doesn’t seem to have hindered progress on self-driving cars very much. And I’m glad we didn’t wait to seriously discuss self-driving cars until we had a precise definition for the term.
Bertrand Russell put it more pithily:
[You cannot] start with anything precise. You have to achieve such precision… as you go along.
I basically agree with this. But if you apply what is described in the post, it reveals a lot about why we are not there yet. If you pit a human driver against any of the described autonomous cars, there will simply be lots of situations where the human performs better. And I don’t need to run this experiment in order to cash out its implications. I think when people talk about fully autonomous cars, they implicitly have something in mind where the autonomous car is at least as good as a human. Thinking about an experiment you could run here makes that implicit assumption explicit, which I think can be useful. It’s one of the tools you can use to make your definition more precise along the way.
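To make that concrete, here is a minimal sketch (in Python, with invented scenario names and outcomes, not real data) of what such an operationalization might look like: compare the human and the autonomous system scenario by scenario, and only call the car “fully autonomous” if there is essentially no scenario left where the human still does better.

```python
# Hypothetical sketch: operationalizing "at least as good as a human driver"
# as a head-to-head comparison across driving scenarios. Scenario names and
# pass/fail outcomes below are made up purely for illustration.

scenarios = {
    # scenario: (human_succeeded, autonomous_car_succeeded)
    "difficult merge":            (True, False),
    "snow-covered road":          (True, False),
    "GPS-mapped off-road course": (True, True),
    "urban driving, traffic laws": (True, True),
}

# Scenarios where the human still outperforms the autonomous car.
human_wins = [
    name for name, (human_ok, car_ok) in scenarios.items()
    if human_ok and not car_ok
]

# The implicit assumption made explicit: "fully autonomous" means there is
# (essentially) no remaining scenario where the human does better.
fully_autonomous = len(human_wins) == 0

print("Scenarios where the human still does better:", human_wins)
print("Counts as 'fully autonomous' under this definition:", fully_autonomous)
```

The point of the sketch is not the particular scenarios but the shape of the test: once you write the comparison down, the hidden “at least as good as a human” clause in the definition becomes visible and arguable.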