Plausible, but which advances in particular are you thinking of? Do you think what you're saying is likely? Does that mean that the next time there are advances, the references will start up again?
I was specifically thinking of the preliminary successes with autonomous vehicles Google has been having, a few high-profile walking robots, and some natural language parsers. Seeing as similar hype clusters have occurred in the past, I would expect them to recur in the future.
Why do you think these advances will “flatten out”?
I was referring to the hype about them. When something's new, it's the subject of all kinds of breathless pronouncements about how it will utterly change the world. Then, when it enters actual use, people find all the pitfalls and limits that it has in practice that the abstract-concept-of-it does not have, and become disaffected with it. Then it just kind of becomes part of the background, not really noticed.
Some of these advances are also running out of low-hanging fruit, most obviously image recognition. We're quickly approaching human-level performance on simple problems, and while there's a massive amount of room for optimization and better training, those improvements aren't likely to be newsworthy in the same way.
The link still suggests that humans are much better.
I don't see how surpassing human-level image recognition won't provide newsworthy stories.
We are still a long way from companies' security cameras simply identifying every person who walks around via facial recognition.
Questions such as whether a school or university is allowed to track attendance via facial recognition software will produce social debates.
Evernote does a bit of image recognition for documents, but aside from that I haven't used any computer-guided image recognition for a while.