I don’t really buy the main argument here, but there were some great sub-arguments in this post. In particular, I found this bit both novel and really interesting:
Even worse, there’s the problem that human-like “AI” will be redundant the moment it’s implemented. Self-driving cars are a real challenge precisely up until the point when they become viable enough that everybody uses them; after that, every car is running on software and we can replace all the fancy CV-based decision making with simple control structures that rely on very constrained and “sane” behaviour from all the other cars. Google assistant being able to call a restaurant or hospital and make a booking for you, or act as the receptionist taking that call, is relevant right up until everyone starts using it; after that, everything will already be digitized and we can switch to better and much simpler booking APIs.
It’s a good point, but it’s a bit like saying that to improve a city you can just bomb it and rebuild it from scratch. In reality, improvements need to be incremental and coexist with the legacy system for a while.
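To make the quoted point concrete: once both sides of a reservation are digitized, the whole interaction collapses into a structured request plus a deterministic check, with no conversation for an assistant to navigate. Here is a minimal sketch of what such a "booking API" might boil down to; the names (`Booking`, `book`, `available_tables`) and the toy data are all hypothetical, purely to illustrate the contrast:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical, minimal booking "API": once both the restaurant and the
# customer are digitized, a reservation is just a structured request and
# a simple availability check.

@dataclass
class Booking:
    party_size: int
    time: datetime

# Toy availability table: time slot -> number of free seats
available_tables = {datetime(2024, 6, 1, 19, 0): 4}

def book(request: Booking) -> bool:
    """Accept the booking if the requested slot has enough free seats."""
    free = available_tables.get(request.time, 0)
    if free >= request.party_size:
        available_tables[request.time] = free - request.party_size
        return True
    return False

print(book(Booking(party_size=2, time=datetime(2024, 6, 1, 19, 0))))  # True
```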