I don’t, incidentally, think that our algorithms are anywhere close to optimal, but I nonetheless felt that the opposing point of view merits a bit more attention than it has received here so far. Its proponents do have a point, even if they’re not 100% correct.
This could actually count as evidence against the claim that AI will surpass humans around the time that the processing speed of computers rivals that of the human brain.
It may be that running a non-jury-rigged rational system against the complexity of the real world requires another order of magnitude or more of processing power beyond what the human brain uses.
This raises the likelihood that the initial AIs will themselves need to be jury-rigged, and will have their own set of cognitive biases.