I also think that human-level AI is unlikely for a number of reasons, more than a few of them related to the differences between biological and machine intelligence. For one thing, we’re approaching the end of Moore’s Law, probably within a decade and a half, and generalized quantum computing isn’t likely to be with us anytime soon. D-Wave’s adiabatic quantum computer, for example, isn’t a general-purpose computer; it’s built for optimization problems. But even setting that aside, the differences between human, animal, and machine intelligence are profound.
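As a rough illustration of why “a decade and a half” is a plausible horizon, here is a back-of-envelope sketch (my own, not from the original piece; the 22 nm starting node and the two-year halving cadence are assumptions) of how quickly feature sizes run into atomic limits:

```python
# Back-of-envelope sketch: if process feature sizes halve roughly
# every two years (the classic Moore's Law cadence), how long until
# they approach the spacing of individual silicon atoms?
feature_nm = 22.0        # assumed ~2011-era process node, in nanometres
atom_nm = 0.2            # rough silicon interatomic spacing
years_per_halving = 2.0  # assumed halving cadence

years = 0.0
while feature_nm > atom_nm:
    feature_nm /= 2.0
    years += years_per_halving

print(f"~{years:.0f} years until features reach atomic scale")
# Prints ~14 years, consistent with "a decade and a half".
```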
...
At present, much about how human beings think is simply unknown. To argue that we can simply “work around” the issue misses the underlying point: we can’t yet quantify the difference between human intelligence and machine intelligence.
...
...it’s worth noting that computers don’t play “human-level” chess. Although computers are competitive with grandmasters, they aren’t truly intelligent in a general sense; they are, essentially, chess-solving machines. And while they’re superior at tactics, they are woefully deficient at strategy, which is why grandmasters can still beat or draw against computers.
...computers lose to humans because they are simply no match for humans at creating long-term chess strategy.
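The tactics/strategy split falls out of how classical engines actually work. Below is a minimal sketch of depth-limited negamax with alpha-beta pruning (my own illustration, not any real engine’s code; the `moves`, `apply_move`, and `evaluate` callbacks are placeholders supplied by the caller). Everything inside the search horizon is computed exactly, which is tactics; everything beyond it is summarized by a static heuristic, which is where long-term strategy lives.

```python
# Minimal depth-limited negamax with alpha-beta pruning, the core of
# classical chess engines.  Within `depth` plies the result is exact
# (tactics); past the horizon the engine must trust `evaluate`, a
# crude static guess (strategy).

def negamax(pos, depth, alpha, beta, moves, apply_move, evaluate):
    """Best achievable score for the side to move in `pos`."""
    legal = moves(pos)
    if depth == 0 or not legal:
        return evaluate(pos)          # horizon reached: heuristic only
    best = float("-inf")
    for m in legal:
        score = -negamax(apply_move(pos, m), depth - 1,
                         -beta, -alpha, moves, apply_move, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:             # opponent would avoid this line
            break
    return best

# Toy usage on a take-1-or-2 counting game (purely illustrative):
moves = lambda n: [1, 2] if n > 0 else []
apply_move = lambda n, m: n - m
evaluate = lambda n: -1 if n == 0 else 0   # no moves left: you lost
print(negamax(5, 10, float("-inf"), float("inf"),
              moves, apply_move, evaluate))  # -> 1 (a forced win)
```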
...
As for Watson’s ability to play Jeopardy!, it’s worth noting that while Watson did win the tournament he played in, his primary advantage was beating the humans to the buzzer (electric relays are faster than chemical ones). Moreover, as many who watched the tournament (myself included) noticed, Watson got worse the more abstract the questions became.
...
What computers are good at is brute-force calculation and memory retrieval. They’re not nearly as good at pattern recognition or at parsing meaning and ambiguity, nor are they good at learning.
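A toy contrast makes the asymmetry concrete (both snippets are my own illustration; the “disambiguation” heuristic is deliberately naive, not a real algorithm):

```python
# Memory retrieval: exact, constant-time, trivially machine-friendly.
facts = {"capital of France": "Paris", "boiling point of water (C)": 100}
print(facts["capital of France"])      # "Paris"; no intelligence needed

# Parsing ambiguity: "bank" has no lookup-shaped answer; picking a
# sense requires context and world knowledge.  This keyword test is
# a deliberately naive stand-in and breaks on any paraphrase:
senses = {"bank": ["river edge", "financial institution"]}
sentence = "She sat on the bank and watched the water flow past."
guess = senses["bank"][0] if "water" in sentence else senses["bank"][1]
print(guess)   # "river edge" here, but "deposited at the bank by the
               # waterfront" would fool it; lookup is not understanding
```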
...
...there are more than a few limitations on human-level AI, not least the physical limits that come with the end of Moore’s Law, and the simple fact that science is only beginning to understand what intelligence, consciousness, and sentience even are. That is going to be a fundamental limitation on artificial intelligence for a long time to come. Personally, I think it will remain one for centuries.
To say that we have to exactly copy a human brain to produce true intelligence, if that is what Knapp and Stross are thinking, is anthropocentric in the extreme. Did we need to copy a bird to produce flight? Did we need to copy a fish to produce a submarine? Did we need to copy a horse to produce a car? No, no, and no. Intelligence is not mystically different.
I think he doesn’t understand what those people are saying. Nobody doubts that you don’t need to imitate human intelligence to get artificial general intelligence; the claim is rather that a useful approximation of AIXI is much harder to achieve than understanding human intelligence.
AIXI is as far from real-world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn’t get you anywhere in terms of real-world general intelligence, just as showing that in some abstract sense you can simulate every physical process doesn’t let you upload yourself into the Matrix.
Stross’s post triggered a reaction from Alex Knapp, ‘What’s the Likelihood of the Singularity? Part One: Artificial Intelligence’, which in turn prompted Michael Anissimov to respond with ‘Responding to Alex Knapp at Forbes’.
...
...
...
...
...
Michael Anissimov wrote:
It seems to me that you are systematically underestimating the significance of this material. Solomonoff induction (which AIXI is based on) is of immense theoretical and practical significance.
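For readers who haven’t met these formalisms, the standard definitions (my rendering from the AIXI literature, not anything stated by either side) make both positions clearer. Solomonoff induction weights every hypothesis, i.e. every program $p$ for a universal prefix machine $U$, by its length $\ell(p)$:

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}$$

AIXI then chooses actions by expectimax over that prior, summing future rewards $r_k, \ldots, r_m$ across all environments $q$ consistent with the interaction history:

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Both expressions quantify over all programs, so $M$ is only lower-semicomputable and AIXI itself is incomputable; that is what makes the theory illuminating in principle while leaving “a useful approximation” as the genuinely hard part.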