Readers of Less Wrong may be interested in this New Scientist article by Noel Sharkey, titled Why AI is a dangerous dream, in which he attacks Kurzweil’s and Moravec’s “fairy tale” predictions and questions whether intelligence is computational (“[the mind] could be a physical system that cannot be recreated by a computer”).
[edit] I thought this would go without saying, but I suspect the downvotes speak otherwise, so: I strongly disagree with the content of this article. I still consider it interesting because it is useful to be aware of differing and potentially popular perspectives on these subjects (and Sharkey is something of a “populist” scientist). I think the opinions it espouses are staggeringly ill-conceived, however.
I strongly disagree. First, on the grounds that LW readers have strong reason to believe this:
(“[the mind] could be a physical system that cannot be recreated by a computer”).
to be false, and so treat it similarly to a proof that 2=1.
But instead of just being a grouch this time, I decided to save you guys the effort and read it myself to see if there’s anything worth reading.
There isn’t. It’s just repetitions of skepticism you’ve already heard, based on Sharkey’s rejection of “the assumption that intelligence is computational” (as opposed to what, and why would that alternative be any different or uncreatable?), of which he will say only that “It might be, and equally it might not be”.
Other than that, it’s a puff piece interview without much content.
(Phase 1)
Agreed, I don’t see why the mind isn’t a type of “computer”, or why living organisms aren’t “machines”. If there were something truly different and special about being organic, then we could just build an organic AI. I don’t get the distinction being made.
(Phase 2)
technological artifacts that have no possibility of empathy, compassion or understanding.
Oh: this sounds like dualism of some kind, if it is impossible for a machine to have empathy, compassion or understanding; that would mean beings with these qualities are somehow more than physical machines.
(Phase 3)
Reading through some of the comments to the article, it sounds like the objection isn’t that intelligence is necessarily non-physical, but that “computation” doesn’t encompass all possible physical activity. I guess the idea is that if reality is continuous, then there could be some kind of complexity gap between discrete computation and an organic process.
Phases 1-3 are the sequential steps I’ve taken to try to understand this point of view. A view can’t be rejected until it’s understood... I’m sure people here have considered the AI-is-impossible view before, but I hadn’t.
What is the physical materialist view on whether reality is discrete? (I would guess it’s agnostic.) What is the AI view on whether computations must be discrete? (I would guess AI researchers wouldn’t dismiss a continuous computation as a non-computational thing if it were possible.)
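For what it’s worth, here is a minimal sketch of what “discrete computation approximating a continuous process” looks like in practice. This is my own toy example, not anything from Sharkey or the thread: an Euler simulation of the continuous system dx/dt = -x converges to the exact continuous solution as the step size shrinks. Whether arbitrarily good approximation is *enough* to capture whatever a brain does is, of course, exactly the philosophical question at issue.

```python
# Toy illustration (not from the article): simulate a continuous system,
# dx/dt = -x, with discrete steps of shrinking size. The discrete
# approximation converges to the continuous solution, which is the usual
# computationalist reply to "but reality might be continuous".

import math

def euler_decay(x0: float, t: float, steps: int) -> float:
    """Discrete (Euler) simulation of dx/dt = -x from time 0 to time t."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += dt * (-x)   # discrete update rule: x_{n+1} = x_n * (1 - dt)
    return x

exact = math.exp(-1.0)   # closed-form value of x(1) when x(0) = 1
for steps in (10, 100, 1000, 10000):
    approx = euler_decay(1.0, 1.0, steps)
    print(f"{steps:>6} steps: x(1) ~ {approx:.6f}  (error {abs(approx - exact):.2e})")
```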
I agree it’s important to apply the principle of charity, but people have to apply the principle of effort too. If Sharkey’s point is about some crucial threshold that continuous systems possess, he should say so. The term “computational” is already taken, so he needs to find another term.
And he can’t be excused on the grounds that “it’s a short interview”, considering that he repeated the same point several times and seemed to find enough space to spell out what (he thinks) his view implies.
“[the mind] could be a physical system that cannot be recreated by a computer”
Let me quote an argument in favor of this, despite the apparently near universal consensus here that it is wrong.
There is a school of thought that says, OK, let’s suppose the mind is a computation, but it is an unsolved problem in philosophy how to determine whether a given physical system implements a given computation. In fact there is even an argument that a clock implements every computation, and it has yet to be conclusively refuted.
If the connection between physical systems and computation is intrinsically uncertain, then we can never say with certainty that two physical systems implement the same computation. In particular, we can never know that a given computer program implements the same computation as a given brain.
Therefore we cannot, in principle, recreate a mind on a computer; at least, not reliably. We can guess that it seems pretty close, but we can never know.
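To make the clock claim concrete, here is a toy rendition of the Putnam-style “trivial implementation” move; this is my own construction, since neither the interview nor the commenter supplies code. Given any finite run of any computation, you can build an after-the-fact lookup table from successive clock readings to the run’s states, so a sufficiently liberal notion of “implements” lets the clock implement anything. The usual reply is that a genuine implementation must also support counterfactuals, which this mapping does not.

```python
# Toy version of the "a clock implements every computation" argument
# (my construction, for illustration only).

def run_of_some_computation() -> list:
    """Any concrete run will do; here, the successive states of a 3-bit counter."""
    return [format(i, "03b") for i in range(8)]

def trivial_mapping(clock_ticks: list, run: list) -> dict:
    """Pair the clock's successive (distinct) readings with the run's states."""
    assert len(clock_ticks) >= len(run)
    return {tick: state for tick, state in zip(clock_ticks, run)}

ticks = list(range(100))                      # the clock's successive readings
mapping = trivial_mapping(ticks, run_of_some_computation())
print(mapping)                                # {0: '000', 1: '001', 2: '010', ...}
# The mapping "recovers" the computation from the clock only because it was
# built from the finished run; it supports no counterfactuals, which is the
# standard objection to counting this as a real implementation.
```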
If LessWrongers have solved the problem of determining what counts as instantiating a computation, I’d like to hear more.
If LessWrongers have solved the problem of determining what counts as instantiating a computation, I’d like to hear more.
Sure thing. I solved the problem here and here in response to Paul Almond’s essays on the issue. So did Gary Drescher, who said essentially the same thing in pages 51 through 59 of Good and Real. (I assume you have a copy of it; if not, don’t privately message me and ask me how to pirate it. That’s just wrong, dude. On so many levels.)
This was linked on Hacker News: http://news.ycombinator.com/item?id=797871
I left this comment there:
I thought this might be something like Eliezer’s arguments against developing a GAI until it could be made provably Friendly, but instead I just got an argument exactly like the ones in 1903 that said heavier-than-air flight by men was impossible. Go back and read some of them; some of the arguments were almost identical. Some of the arguments are currently true, but some of them amount to “I can’t do it, and no one else has done it, therefore there must be some fundamental reason it can’t be done”.