An interesting, though somewhat bizarre, prediction about the difficulty of building AI, made by Scott Adams in a recent Periscope session (paraphrased from memory):
“The perception that building human intelligence is so difficult results from a perceptual distortion, namely that human intelligence is something great, when in fact we humans do not possess superior rationality. We only think we do. We just bounce around randomly and try to explain that as something awesome after the fact. Building artificial intelligence, then, is hard because we are trying to build something that doesn’t exist. On the other hand, building, say, a robot that moves around arbitrarily based on some complex inner mechanism and generates explanations for why it does so would be easy, and it would appear very intelligent.”
The thing is, this is a testable approach and a testable prediction. I want to document it here partly because he claims to have been saying this for some years now.
In what way is this testable?
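To make the claim concrete, here is a minimal sketch (my own, not Adams’) of the kind of agent he describes: it picks actions at random and invents an explanation after the fact. One could, in principle, put such a robot in front of onlookers and ask whether its behavior and commentary come across as intelligent. All names in the sketch are hypothetical.

```python
# A minimal sketch of the agent described in the prediction: actions are
# chosen with no deliberation at all, and a "reason" is invented afterwards,
# independently of the action. Class and method names are hypothetical.
import random


class RandomExplainerBot:
    ACTIONS = ["move forward", "turn left", "turn right", "stop"]
    REASONS = [
        "to explore an area I haven't mapped yet",
        "because I noticed something interesting over there",
        "to avoid a possible obstacle",
        "to conserve energy while I reassess the situation",
    ]

    def step(self) -> str:
        # Random action, then a post-hoc explanation that has no causal
        # connection to why the action was actually chosen.
        action = random.choice(self.ACTIONS)
        reason = random.choice(self.REASONS)
        return f"I decided to {action} {reason}."


if __name__ == "__main__":
    bot = RandomExplainerBot()
    for _ in range(5):
        print(bot.step())
```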
The idea that a robot that moves around randomly and generates explanations would appear intelligent to onlookers might be true, but not very interesting.
The idea that there is nothing more to human intelligence than that is just silly. Besides randomly bouncing around, humans play chess, predict the weather, build bridges, and make long-term plans in general. Those are not so easy to reproduce. By the way, as I recall Scott Adams saying himself, it’s best not to take a cartoonist seriously.