For example, it seems that as soon as a computer can reliably outperform humans at some task, we drop that task from our intuitive definition of “task demonstrating true intelligence”.
And the reason for that is simple—the real working definition of “intelligence” in our brains is something like, “that invisible quality our built-in detectors label as ‘mind’ or ‘agency’”. That is, intelligence is an assumed property of things that trip our “agent” detector, not a real physical quality.
Intuitively, we can only think of something as being intelligent to the extent that it seems “animate”. If we discover that the thing is not “animate”, then our built-in detectors stop treating it as an agent… in much the same way that we stopped believing in wind spirits once we understood the weather. (These are the same detectors that historically let us tell an accidental branch movement from the activity of an intelligent predator-agent.)
So, even though a person without the appropriate understanding might perceive a thermostat as displaying intelligent behavior, as soon as they understand the thermostat as a mechanical device, the brain stops labeling it as animate, and therefore no longer considers it “intelligent”.
This is one reason why it’s really hard for truly reductionist psychologies to catch on: the brain resists grasping itself as mechanical, and insists on projecting “intelligence” onto its own mechanical processes. (Which is why we have oxymoronic terms like “unconscious mind”, and why the first response many people have to PCT ideas is that their controllers are hostile entities trying to “control” them in the way a human agent might, rather than as a thermostat does.)
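To make the contrast concrete, here is a minimal illustrative sketch (the function name, thresholds, and set point are all made up for this example) of the kind of negative-feedback loop a thermostat, or a PCT-style controller, actually runs. Nothing in it “wants” anything; it just acts on the difference between a sensed value and a reference value:

```python
# A minimal sketch of a thermostat as a negative-feedback controller.
# Illustrative only: names and constants are invented for this example.

def thermostat_step(sensed_temp: float, set_point: float) -> str:
    """Return the heater command for one control cycle."""
    error = set_point - sensed_temp   # how far the sensed value is from the reference
    if error > 0.5:                   # too cold: switch the heat on
        return "heat_on"
    if error < -0.5:                  # too warm: switch the heat off
        return "heat_off"
    return "hold"                     # close enough: do nothing

# Watching only the behavior, the room appears to "seek" 21 degrees as if it
# wanted to be warm; seeing the three-line rule above, the agent impression vanishes.
for temp in [17.0, 19.0, 20.8, 21.6, 22.3]:
    print(temp, thermostat_step(temp, set_point=21.0))
```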
So, AI will always be in retreat, because our brain will refuse to grant that elusive label of “mind” to anything we can understand mechanically. To our brains, something mechanically grasped cannot be an agent. (Which may lead to interesting consequences when we eventually fully grasp ourselves.)
This is an important insight. The psychological effects of full self-understanding could be extremely distressing for the human concerned, especially since we tend to reserve moral status for “agents” rather than “machines”. In fact, I suspect that a large component of the depression I have been going through since really grasping the concept of “cognitive bias” is that my mind has started to classify itself as “mechanical” rather than “animate”.