Selection bias. We’re using driving as an example because it turned out that humans are actually good enough at it. Lots of other things that humans aren’t good enough at simply weren’t done before automation and computers.
All this really tells us is that we’re good at some things that weren’t in our ancestral environment. Which we know already (we can do math!).
For things like building AGI, which no human has done and for which we don’t yet have a coherent theory or roadmap (other than ‘copy this hugely complicated black box’), we don’t know how easy or difficult they really are. We can get an outside view by comparing with other tasks we once didn’t know how to do, some of which we later succeeded at and some of which we failed at despite a lot of effort. But I think there’s a lot of variation between cases, and prediction is hard.
Plus, our example is specifically driving at the skill level that humans are capable of.
It feels to me like we could drive safely while a little drunk, if we stuck to 20mph and wide roads with shallow turns, and if everyone else did the same. (I haven’t driven in years, and never while drunk, so I might be wrong. Even if I’m right, other people do not do the same. Don’t do this.) If that were the normal difficulty level to drive at, we might say that humans are pretty good at driving even while drunk. But the level we normally drive at is approximately the best we can do, so when we get drunk, we can no longer do it at that level.
If we were used to a world where cars were mostly driven by computers, would we really say humans were good at it? A human compared to a computer could easily be worse than a drunk human compared to a sober human.
Or, phrased as a Malthusianism: “Why is driving this dangerous? Because if it wasn’t, people would drive faster and it would be dangerous again.”
(I disagree with some of the Malthusianisms in that post.)