The more I talk to people about intelligence, the more I realize Eliezer et al.’s wisdom in abandoning the term in favour of “optimization process”.
Your intuitive criterion for labelling something as intelligent is not a reliable guide. For example, it seems that as soon as a computer can reliably outperform humans at some task, we drop that task from our intuitive definition of “task demonstrating true intelligence”.
150 years ago, factoring large numbers would have been considered to be the pinnacle of true intelligence.
50 years ago, chess was considered the ultimate test of true intelligence, which is why people made bets that AI would never beat the best human chess players. Perhaps in 50 years’ time, the ability to suffer from cognitive biases, or to have one’s thinking swayed by emotional factors, will be considered the true standard of intelligence, because computers will have beaten us at everything else.
We have a moving goalpost problem.
But in any case, what matters for the activities of SIAI is the ability of computers to optimize the world, not some arbitrary, ill-defined, time-varying intuitive notion of “true intelligence”, which behaves like the end of the rainbow: the closer you approach it, the further it recedes.
> For example, it seems that as soon as a computer can reliably outperform humans at some task, we drop that task from our intuitive definition of “task demonstrating true intelligence”.
And the reason for that is simple—the real working definition of “intelligence” in our brains is something like, “that invisible quality our built-in detectors label as ‘mind’ or ‘agency’”. That is, intelligence is an assumed property of things that trip our “agent” detector, not a real physical quality.
Intuitively, we can only think of something as intelligent to the extent that it seems “animate”. If we discover that the thing is not “animate”, our built-in detectors stop treating it as an agent, in much the same way that we stopped believing in wind spirits once we understood the weather. (Those detectors presumably evolved for problems our ancestors faced, like telling an accidental branch movement from the activity of an intelligent predator.)
So, even though a person without the appropriate understanding might perceive a thermostat as displaying intelligent behavior, as soon as they understand the thermostat’s workings as a mechanical device, the brain stops labeling it as animate, and therefore no longer considers it “intelligent”.
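To see how little there is to understand, here is a minimal sketch of a thermostat’s entire control loop, in Python; the set-point, dead band, and toy room physics are invented for illustration:

```python
def thermostat_step(temperature, heater_on, target=20.0, band=0.5):
    """Decide the heater state from one temperature reading,
    with a dead band so the heater doesn't rapidly cycle."""
    if temperature < target - band:
        return True        # too cold: switch the heater on
    if temperature > target + band:
        return False       # too warm: switch it off
    return heater_on       # inside the band: leave it as it is

# A toy simulated room, just to watch the loop "behave".
temp, heater = 17.0, False
for _ in range(10):
    heater = thermostat_step(temp, heater)
    temp += 0.8 if heater else -0.3   # crude stand-in physics
    print(f"{temp:.1f} C, heater {'on' if heater else 'off'}")
```

Nothing in those few lines trips the “agent” detector once you have read them; on this account, that is all it takes for the label “intelligent” to fall away.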
This is one reason why it’s really hard for truly reductionist psychologies to catch on: the brain resists grasping itself as mechanical, and insists on projecting “intelligence” onto its own mechanical processes. (Which is why we have oxymoronic terms like “unconscious mind”, and why the first response many people have to the ideas of PCT (perceptual control theory) is that their controllers are hostile entities trying to “control” them in the way a human agent might, rather than in the way a thermostat does.)
So, AI will always be in retreat, because anything we can understand mechanically, our brain will refuse to grant that elusive label of “mind”. To our brains, something mechanically grasped cannot be an agent. (Which may lead to interesting consequences when we eventually fully grasp ourselves.)
> (Which may lead to interesting consequences when we eventually fully grasp ourselves.)
This is an important insight. The psychological effects of full self-understanding could be extremely distressing for the human concerned, especially as we tend to reserve moral status for “agents” rather than “machines”. In fact, I suspect that a large component of the depression I have been going through since really grasping the concept of “cognitive bias” is that my mind has started to classify itself as “mechanical” rather than “animate”.
You are wrong. Factoring large numbers has never been considered the pinnacle of true intelligence. Find me a reference if you expect me to believe that, circa 1859, something so simple was considered the pinnacle of anything.
I completely agree with the moving-goalposts critique, and I think there is good AI and there has been great progress. But when you find yourself defending the idea that a program that factors numbers is a good example of artificial intelligence, alarm bells should start ringing, regardless of whether you are talking about intelligence or optimization.
I think this is perhaps a bad example, because even today, if you ask someone on the street to find the factors of 9,991, there’s no way they’ll do it; and if you show them someone who can, they will say, “wow, that’s really clever, she must be intelligent”.
So it is still the case that factoring 9,991 would be considered by most people to require a lot of intelligence. Hell, most people couldn’t factor 100, never mind 9,991 or 453,443.

People are stupider than you think.
You said it was “considered to be the pinnacle of intelligence” 150 years ago (that is, almost 150 years after calculus was invented), and now you’re interpreting that as meaning “a person on the street would consider it intelligent”. And you said I was moving the goalposts?
It is a bad example, but it’s a bad example because we could explain the algorithm to somebody in about 5 minutes.
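For what it’s worth, the “five-minute algorithm” really is that small. Here is a sketch of trial division in Python, as an illustrative toy, not how anyone factors cryptographic-size numbers:

```python
def factor(n):
    """Return the prime factors of an integer n > 1 by trial division."""
    factors = []
    d = 2
    while d * d <= n:          # only need to try divisors up to sqrt(n)
        while n % d == 0:      # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(factor(100))     # [2, 2, 5, 5]
print(factor(9991))    # [97, 103]
print(factor(453443))  # [599, 757]
```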
I don’t think we disagree. I just think that if chess programs are no more sophisticated now than they were 5 or 10 years ago, then they’re poor examples of intelligence.