There is a general phenomenon in tech, expressed many times (e.g., “Amara’s law”), of people over-estimating a technology’s short-term consequences and under-estimating its longer-term ones.
I think it is often possible to see that current technology is on track to achieve X, where X is widely perceived as the main obstacle to the real-world application Y. But once you solve X, you discover there is a myriad of other “smaller” problems Z_1, Z_2, Z_3 that you need to resolve before you can actually deploy it for Y.
And of course, there is always a huge gap between demonstrating you solved X on some clean academic benchmark vs. needing to do so “in the wild”. This is particularly an issue in self-driving, where errors can be literally deadly, but it arises in many other applications.
I do think that one lesson we can draw from self-driving is that there is a huge gap between full autonomy and “assistance” with human supervision. So, I would expect to see AI deployed as (increasingly sophisticated) “assistants” well before AI systems are actually able to function as “drop-in” replacements for current human jobs. This is part of the point I was making here.
Do you know of any compendiums of such Z_n’s? Would love to read one.
I know of one: the steam engine was “working” and continuously patented and modified for a century (iirc) before someone used it in boats at scale. https://youtu.be/-8lXXg8dWHk
See also my post https://www.lesswrong.com/posts/gHB4fNsRY8kAMA9d7/reflections-on-making-the-atomic-bomb
The Manhattan Project was all about taking something that’s known to work in theory and solving all the Z_n’s.