The observation that tasks people used to consider hallmarks of intelligence are now considered easy is critical.
The space of stuff remaining that we call intelligent, but AIs cannot yet do, is shrinking. Every time AI eats something, we realize it wasn’t even that complicated.
The reasonable lesson appears to be: we should stop assuming by default that things are hard, and start expecting that even stupid approaches might be able to do far too much.
It’s a statement about the problems being solved more than about the problem solvers.
When you stack this on a familiarity with the techniques in use and how they can be transformatively improved with little effort, that’s when you start sweating.