What should be more troubling is if it looks like they were grappling with the right issues.
With regard to Moravec’s paradox: if you can’t see how your AI idea will replicate the behaviour of a cat (provided you have a cat), your notion of ‘general intelligence’ is probably thoroughly confused. If your AI needs to do something very superhuman to survive as a cat in the wild, likewise. The next step could be the invention of a stone tied to a stick, from scratch. If instead you picture your AI doing advanced superhuman technology, your standard of understanding is too low.
if you can’t see how your AI idea will replicate the behaviour of a cat (provided you have a cat), your notion of ‘general intelligence’ is probably thoroughly confused
If you didn’t know Moravec’s paradox, and ranked the difficulty of cognitive tasks by their perceived human difficulty, then you’d conclude that any AI that could play chess could trivially behave as a cat, once you gave it the required body.
That’s wrong, but there wasn’t evidence for it being wrong, back in 1955.
Moravec’s “paradox” has always been obvious to me, even before I knew it had a name. Now, I did get 25 on the AQ test, and I don’t think that Moravec’s paradox would also be obvious to a more neurotypical person (otherwise it wouldn’t be called a paradox in the first place), but we’re talking about an AI conference, so I would’ve expected that at least some participants would have sensed that.
In terms of how complex natural languages etc. are in an abstract sense? I’d expect that to have been more or less the same for the past few tens of millennia… And the argument that catching a baseball (or something like that) is easy for humans but explicitly writing down and solving the differential equations that govern its motion would be much harder is something that IIRC dates back to the mid-20th century.
EDIT: And BTW...
If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.
-- John von Neumann in 1947. (Actually I was looking for a different quote, but this one will do. EDIT 2: That was “You insist that there is something that a machine can’t do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that.”)
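To make the baseball point above concrete, here is a minimal sketch (not from the thread; every parameter value is invented for illustration) of what “explicitly writing down and solving the differential equations” amounts to once air drag is included — numerical integration of coupled ODEs, versus the interception problem an outfielder solves without any of it:

```python
def simulate_throw(v0x, v0y, dt=0.001, drag_k=0.005, g=9.81):
    """Euler-integrate a thrown ball with quadratic air drag:
    x'' = -k*|v|*x',  y'' = -g - k*|v|*y'.  Returns landing distance (m).
    drag_k and the initial speeds below are made-up illustrative values."""
    x, y, vx, vy = 0.0, 0.0, v0x, v0y
    while True:
        speed = (vx * vx + vy * vy) ** 0.5
        ax = -drag_k * speed * vx       # horizontal deceleration from drag
        ay = -g - drag_k * speed * vy   # gravity plus vertical drag
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0 and vy < 0.0:       # ball has come back down to the ground
            return x

range_with_drag = simulate_throw(20.0, 20.0)
range_in_vacuum = simulate_throw(20.0, 20.0, drag_k=0.0)
# Drag shortens the throw noticeably relative to the vacuum trajectory.
```

Even this crude sketch needs an explicit physical model and a numerical integrator, while the human catcher’s brain handles the equivalent prediction effortlessly — which is exactly the asymmetry Moravec’s paradox points at.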
In terms of how complex natural languages etc. are in an abstract sense?
I suspect he means that the knowledge base around Moravec’s paradox has seeped into many sciences and stories and our implicit understanding of the world.
In a sense, all that we had to know was that after 50 years of trying, there were no marching robots but there were decent computer chess programs… to suspect that something non-intuitive was going on here.
How do you separate this from hindsight bias?
Now I might be mis-remembering things, but...
The world is a different place now. Unless your time frame for “before I knew it had a name” is ~1970?