Much recent progress in problems traditionally considered to be ‘AI’ problems has come not from dramatic algorithmic breakthroughs or from new insights into the way human brains operate but from throwing lots of processing power at lots of data. It is possible that there are few grand ‘secrets’ to AI beyond this.
The way the human brain has developed suggests to me that human intelligence is not the result of evolution making a series of great algorithmic discoveries on the road to general intelligence but of refinements to certain fairly general-purpose computational structures.
The ‘secret’ of human intelligence may be little more than wiring a bunch of sensors and effectors up to a bunch of computational capacity and dropping it in a complex environment. There may be no such thing as an ‘interesting’ AI problem by whatever definition you are using for ‘interesting’.
Elaborate? I’m familiar with Searle’s Chinese Room thought experiment, but I’m not sure what your point is here.
Much of what feels like deep reasoning from the inside has been revealed by experiment to be simple pattern recognition and completion.