Last time I read much about computer chess, the better programs were still relying primarily on brute-force search with some minor algorithmic optimizations to prune the search space, together with enormous databases for openings and endgames. Are there actually chess programs nowadays that deserve to be called intelligent?
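For concreteness, the "brute-force search with pruning" at the core of such programs is something like minimax with alpha-beta pruning. A minimal sketch, assuming hypothetical `moves`, `evaluate`, and `apply_move` functions in place of a real engine's move generator, evaluation function, and position update:

```python
# A minimal sketch of minimax search with alpha-beta pruning, the core of a
# classical chess engine. `moves`, `evaluate`, and `apply_move` are
# hypothetical stand-ins for a real engine's move generator, evaluation
# function, and position update.

def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate, apply_move):
    """Return the minimax value of `state`, skipping ("pruning") branches
    that provably cannot change the final decision."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)              # leaf node: static evaluation
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, evaluate, apply_move))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent would never allow this line
                break                       # -> prune the remaining moves
        return value
    else:
        value = float("inf")
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True,
                                         moves, evaluate, apply_move))
            beta = min(beta, value)
            if alpha >= beta:               # we would never enter this line
                break                       # -> prune the remaining moves
        return value

# Toy demonstration on a fixed game tree (tuples are internal nodes,
# integers are leaf scores); the third branch is pruned after its first leaf.
tree = ((3, 5), (6, 9), (1, 2))
best = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                 moves=lambda s: [] if isinstance(s, int) else range(len(s)),
                 evaluate=lambda s: s,
                 apply_move=lambda s, m: s[m])
print(best)  # 6 == max(min(3, 5), min(6, 9), min(1, 2))
```

Everything a real engine adds (move ordering, transposition tables, the opening and endgame databases mentioned above) serves the same end: searching further, faster.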
So what? If you get killed by an uFAI, you cannot appeal to reality and say “but the AI just used a brute-force search method with some minor algorithmic optimizations to prune the search space, together with enormous databases of weapons technology and science, so can you please unkill me?”
The problem domain of chess happens to be one where brute-force search with some clever tricks actually works. Other domains are not like this: getting a robot to walk (ASIMO, BigDog), for example, has required other, more appropriate techniques, such as machine learning.
What is your criterion for intelligence, anyway?
Your first point—that you can be easily killed or checkmated by a sufficiently powerful program regardless of how it is implemented—is true but irrelevant: the question was not whether the program is powerful and effective (which I would not dispute) but whether it deserves to be called intelligent. You can say that whether it is intelligent or not is unimportant and that what matters is how effective it is, but it is wrong to conflate the two questions and pretend that an answer for one is an answer for the other, unless you are going to make an explicit argument that they are isomorphic or equivalent in some way.
I would argue that a problem domain where brute-force search with simple optimizations actually works extremely well is a problem domain that does not require intelligence. If brute-force search with a few optimizations is intelligent, then a program for factoring numbers is an artificial intelligence.
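To make that concrete, here is essentially the whole of such a factoring program; a minimal trial-division sketch (real factoring software uses cleverer algorithms, but equally mechanical ones):

```python
def factorize(n):
    """Return the prime factors of n (with multiplicity) by exhaustive
    trial division -- pure mechanical search, no insight required."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # divide out each factor as it is found
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

print(factorize(9991))  # [97, 103] -- the number that comes up below
```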
I don’t have a criterion for intelligence in mind, but like porn, “I know it when I see it”. We might disagree about edge cases, but almost all of us will agree that a number factoring program isn’t “intelligent” in any interesting sense of the term. That’s not to say that it might not be fantastically effective, or that a similarly dumb program with weapons as actuators might not be a formidable foe, but it’s a different question to that of intelligence.
The more I talk to people about intelligence, the more I realize Eliezer et al.’s wisdom in abandoning the term in favour of “optimization process”.
Your intuitive criterion for labelling something as intelligent is not a good thing to be going with. For example, it seems that as soon as a computer can reliably outperform humans at some task, we drop that task from our intuitive definition of “task demonstrating true intelligence”.
150 years ago, factoring large numbers would have been considered to be the pinnacle of true intelligence.
50 years ago, chess was considered the ultimate test of true intelligence—which is why people made bets that AI would never beat the best human chess players. Perhaps in 50 years’ time, the ability to suffer from cognitive biases or to have one’s thought biased by emotional factors will be considered the true standard of intelligence, because computers will have beaten us at everything else.
We have a moving goalpost problem.
But in any case, the ability of computers to optimize the world is what matters for the activities of SIAI, not some arbitrary, ill-defined, time-varying intuitive notion of “true intelligence”—which seems to behave like the end of the rainbow: the closer you approach it, the further it moves away.
And the reason for that is simple—the real working definition of “intelligence” in our brains is something like, “that invisible quality our built-in detectors label as ‘mind’ or ‘agency’”. That is, intelligence is an assumed property of things that trip our “agent” detector, not a real physical quality.
Intuitively, we can only think of something as being intelligent to the extent that it seems “animate”. If we discover that the thing is not “animate”, then our built-in detectors stop considering it an agent… in much the same way that we stopped believing in wind spirits after figuring out the weather. (It is the same detector that historically had to discern an accidental branch movement from the activity of an intelligent predator-agent.)
So, even though a person without the appropriate understanding might perceive a thermostat as displaying intelligent behavior, as soon as they understand the thermostat’s workings as a mechanical device, the brain stops labeling it as animate, and therefore no longer considers it “intelligent”.
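The thermostat’s entire workings do fit in a few lines. A minimal sketch of the bang-bang control loop, assuming hypothetical `read_temperature` and `set_heater` functions for the sensor and the switch:

```python
# A thermostat's entire "workings": one bang-bang feedback rule.
# `read_temperature` and `set_heater` are hypothetical stand-ins for the
# device's sensor and switch.

def thermostat_step(read_temperature, set_heater,
                    setpoint=20.0, hysteresis=0.5):
    """One tick of a thermostat: switch the heater according to the current
    temperature, with a small dead band to avoid rapid toggling."""
    t = read_temperature()
    if t < setpoint - hysteresis:
        set_heater(True)        # too cold: turn the heating on
    elif t > setpoint + hysteresis:
        set_heater(False)       # too warm: turn the heating off
    # within the dead band: leave the heater in its current state
```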
This is one reason why it’s really hard for truly reductionist psychologies to catch on: the brain resists grasping itself as mechanical, and insists on projecting “intelligence” onto its own mechanical processes. (Which is why we have oxymoronic terms like “unconscious mind”, and why the first response many people have to PCT ideas is that their controllers are hostile entities trying to “control” them in the way a human agent might, rather than as a thermostat does.)
So, AI will always be in retreat, because anything we can understand mechanically, our brain will refuse to grant that elusive label of “mind”. To our brains, something mechanically grasped cannot be an agent. (Which may lead to interesting consequences when we eventually fully grasp ourselves.)
This is an important insight. The psychological effects of full self-understanding could be extremely distressing for the human concerned, especially as we tend to reserve moral status for “agents” rather than “machines”. In fact, I suspect that a large component of the depression I have been going through since really grasping the concept of “cognitive bias” is that my mind has started to classify itself as “mechanical” rather than “animate”.
You are wrong. Factoring large numbers has never been considered the pinnacle of true intelligence. Find me a reference if you expect me to believe that, circa 1859, something so simple was considered the pinnacle of anything.
I completely agree with the moving-goalposts critique, and I think there is good AI and there has been great progress; but when you find yourself defending the idea that a program that factors numbers is a good example of artificial intelligence, alarm bells should start ringing, regardless of whether you are talking about intelligence or optimization.
I think that this is perhaps a bad example, because even today, if you ask someone on the street to find the factors of 9,991, there’s no way they’ll do it; and if you show them someone who can, they will say, “wow, that’s really clever, she must be intelligent”.
So it is still the case that factoring 9,991 would be considered by most people to require lots of intelligence. Hell, most people couldn’t factorize 100, never mind 9,991 or 453,443.
People are stupider than you think.
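For what it’s worth, 9,991 falls to a purely mechanical trick: 9,991 = 100² − 3² = (100 − 3)(100 + 3) = 97 × 103 (a difference of squares); the trial-division sketch above finds the same factors by pure exhaustion.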
You said it was “considered to be the pinnacle of intelligence” 150 years ago, that is, more than 150 years after calculus was invented, and now you’re interpreting that as meaning “a person on the street would consider that intelligent”. And you said I was moving goalposts?
It is a bad example, but it’s a bad example because we could explain the algorithm to somebody in about 5 minutes.
I don’t think we disagree. I just think that if chess programs are no more sophisticated now than they were 5 or 10 years ago, then they’re poor examples of intelligence.