Exactly. A.I. researchers really need to be familiar with these two papers, which were also linked in a comment above. Despite the Parberry analysis also linked above, I am still a believer in Strong A.I., but I definitely found it instructive to grapple with the computational complexity arguments. They made me reassess certain opinions about resources that I had not thought about before.
I also like the eminent MIT mathematician B.K.P. Horn’s comments, relayed here as part of an abstract to a recent talk he gave:
If a system does learn how to do something, what have we learned?
Often retrospective analysis of “learning” systems yield insights, but not the expected ones.
Hoping a system will “learn” how to do something can be a way to avoid doing the hard science necessary to solve the problem properly.
And in the end often not much may be learnt by the researcher, since typically the learned state of the system is inscrutable.
Garbage in ⇒ garbage out applies to “learning” as much as it does to computing in general.
Extreme efforts to advance the algorithms and mathematics underlying “learning” may not pay off as well as simple efforts to provide better inputs by understanding vision better.
I think the problem with the Turing test is that it isn’t a test of intelligence so much as a definition for it. If you deny the Turing test, you deny that intelligence (as defined) is a meaningful concept. I think there might be good reasons to make such a move.
I agree. You cannot decouple the ethical consequences of labeling an agent as “intelligent” from the choice of a definition. If intelligence is pure capacity to perform a certain action, then we might force ourselves to give look-up tables equal rights, or to make it illegal to unplug a future, slightly souped-up Watson. The Shieber paper alleviates some of this worry by showing that if you take a Turing test as a definition of intelligence, it functions as an interactive proof that the agent really has a general capacity for whatever the test requires; that is, computational complexity rules out the agent merely doing a brute-force look-up, or something else you may not want to define as intelligent.
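To make the brute-force worry concrete, here is a rough back-of-envelope sketch. The numbers (plausible interrogator sentences per turn, number of turns) are my own illustrative assumptions, not figures from the Shieber paper, but any remotely reasonable choices give the same qualitative blow-up:

```python
# Back-of-envelope sketch of the brute-force look-up worry.
# V and T below are illustrative assumptions, not figures from the Shieber paper.

V = 10**6   # distinct plausible interrogator sentences per turn (assumed)
T = 10      # interrogator turns in a short Turing test (assumed)

# A canned-reply table needs one entry per possible conversation history:
table_entries = V ** T
print(f"entries needed: {table_entries:.3e}")   # about 1e60

ATOMS_IN_EARTH = 1.33e50   # rough order-of-magnitude figure, for scale
print(f"entries per atom of the Earth: {table_entries / ATOMS_IN_EARTH:.1e}")
```

The point is only that the table grows as V^T, so even a short conversation pushes a pure look-up strategy past any physically realizable storage.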
But even so, humans judge each other to be conscious and intelligent based on pretty slim evidence (a two-minute conversation, say, or even just a sequence of posts and replies on a blog site like this one). How does anyone here know I’m not just a natural language bot like CleverBot? The short answer is that computational complexity precludes this, and so each new fluid reply I make acts like additional Bayesian evidence that p4wnc6 is not an exponential look-up bot in a table of conversation replies.
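To see how quickly that kind of evidence accumulates, here is a minimal Bayesian sketch; the prior and the two likelihoods (“fluid reply given human” and “fluid reply given look-up bot”) are hypothetical numbers chosen purely for illustration:

```python
# Minimal Bayesian sketch of the "each fluid reply is evidence" point.
# Every number here is an assumption chosen for illustration, not a measurement.

prior_bot = 0.5              # prior probability the interlocutor is a look-up bot
p_fluid_given_human = 0.9    # chance a human gives a fluid, on-topic reply (assumed)
p_fluid_given_bot = 0.3      # chance a finite look-up bot does, off its table (assumed)

posterior_bot = prior_bot
for reply in range(1, 11):   # ten consecutive fluid replies
    # Bayes' rule: P(bot | fluid reply) is proportional to P(fluid | bot) * P(bot)
    num = p_fluid_given_bot * posterior_bot
    den = num + p_fluid_given_human * (1 - posterior_bot)
    posterior_bot = num / den
    print(f"after reply {reply:2d}: P(look-up bot) = {posterior_bot:.5f}")
```

The posterior shrinks geometrically, by the likelihood ratio each turn, which is the sense in which every new fluid reply counts against the look-up-bot hypothesis.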
Humanity has never had to deal with possibilities like CleverBot until now. That means we’re probably going to redefine what constitutes effective person-to-person interactive proof of intelligence, at least if automatons become good enough at natural-language tasks. Imagine a bank’s customer-service webpage being run by a variant of CleverBot. Even if it “gets the behavior right,” and as far as I can tell responds and acts like a human on a chat client, I might still want to reserve judgment and wait until it passes some vastly more complicated Turing test before I make emotional or intuitive decisions about how to treat it, or interact with it, under the assumption that it is intelligent.
Similarly, if intelligence is just outward behavior, then how much hardware do I have to add to Watson before it becomes immoral to unplug the machine? If the answer isn’t a function of hardware, then intelligence isn’t solely a function of behavior, but also of human concerns about how that behavior is produced algorithmically.
This assumes that we’d give rights to computers; they might not be considered morally important even if intelligent, especially if all that being “intelligent” means is being conversant.
I completely agree. I am just saying you can’t divorce the decision to label something as ‘intelligent’ from the ethical discussion of what such a label might imply. If labeling something as ‘intelligent’ has no consequence whatsoever, and is just a purely quantitative measure of the capacity to perform an action, then it’s just a pedantic matter of definitions. The only part of any interest is whether anything hangs in the balance over the decision to label, or not label, something as intelligent.
Sure you can. These are two independent steps; they needn’t be taken together.
Then the first label of intelligence is merely a definition about the capacity to perform a measurable action, and any argument over it is just pedantic hair-splitting. Note that I don’t take this view, because in practice lots of things hinge on whether we consider someone or something else to be intelligent. “Consider the Lobster” by David Foster Wallace is a great essay about this, though its focus is on cross-species ethical obligations and why we do or do not feel they are important. But clearly, the more “sentient” something seems, the more ethically we treat it, and the more effort and social moral stigma go into doing so. And this is largely judged by the tiny little Turing tests going on all day, every day.
People keep downvoting my comments without giving any sensible replies or criticisms. Poor form.
It might be because of your focus on definitions.
What exactly do you mean by “the ethical consequences of labeling”? It could mean something...unpopular, but it is at least ambiguous.