I don’t think we disagree too much, but what does “play the right functional role” mean, given that my desires are not merely about what brain-state I want to have, but are about the real world? If I have a simple thermostat in which a bimetallic spring opens or closes a switch, I can’t talk about the thermostat’s approximate real-world goals until I know whether the switch goes to the heater or to the air conditioner. And if I had two such thermostats, I would need their connections to the external world to figure out whether they were consistent or inconsistent.
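To make the thermostat point concrete, here’s a minimal sketch (hypothetical class names, purely illustrative): the internal mechanism is literally identical in both instances, and any approximate real-world goal only becomes determinate once you know what the switch is wired to.

```python
class Heater:
    def run(self):
        print("heating: pushes the room toward the setpoint")


class AirConditioner:
    def run(self):
        print("cooling: pushes the room away from the setpoint")


class BimetallicThermostat:
    """Same internal rule in every instance: close the switch when it's cold."""

    def __init__(self, setpoint, device):
        self.setpoint = setpoint
        self.device = device  # the external hookup, not part of the internal mechanism

    def step(self, room_temp):
        if room_temp < self.setpoint:  # spring bends, switch closes
            self.device.run()


# Identical internals, opposite real-world roles:
BimetallicThermostat(20, Heater()).step(room_temp=18)
BimetallicThermostat(20, AirConditioner()).step(room_temp=18)
```

Two such thermostats are “consistent” or “inconsistent” only as wired-up wholes; nothing inside the spring-and-switch settles it.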
In short, the important functional role of my desires is not confined to my cranium; they function in interaction with my environment. If you were a new superintelligence, and the first thing you found was a wireheaded human, you might conclude that humans value having pleasurable brain states. If the first thing you found were humans in their ancestral environment, you might conclude that they value nutritious foods or producing healthy babies. The brains are basically the same, but the outside world they’re hooked up to is different.
So from the premises of functionalism, we get a sort of holism.
I think the simplest intentional systems just refer to their own sensory states. It’s true that we are able to refer to external things, but that isn’t because the external causes of our cognitive states somehow differ from those of such simple systems. External reference is earned by reasoning in such a way that attributing content like ‘the cause of this and that sensory state …’ is a better explanation of our brain’s dynamics and behavior than just ‘this sensory state’, e.g. by reasoning in accordance with the axioms of Pearl’s causal models. This applies to the content of both our beliefs and desires.
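To make the Pearl point concrete, here’s a toy sketch of my own (the variables and numbers are purely illustrative): an agent whose content is read as just ‘this sensory state’ predicts the same thing in both regimes below, while one whose content is ‘the external cause of this sensory state’ correctly predicts what happens under an intervention on the sensor.

```python
# Toy structural causal model: World -> Sensor. Conditioning on the sensor is
# informative about the world; intervening on the sensor (Pearl's do-operator,
# e.g. wireheading the sensor) severs that link.
import random

random.seed(0)
worlds = [random.choice(["food_present", "no_food"]) for _ in range(10_000)]


def sensor(world, do=None):
    # do=value simulates an intervention that overrides the world -> sensor link
    if do is not None:
        return do
    return "signal" if world == "food_present" else "no_signal"


# Observational: P(food | sensor = "signal") is 1.0 by construction.
obs = [w for w in worlds if sensor(w) == "signal"]
print(sum(w == "food_present" for w in obs) / len(obs))

# Interventional: P(food | do(sensor = "signal")) = P(food), about 0.5.
forced = [(w, sensor(w, do="signal")) for w in worlds]
print(sum(w == "food_present" for w, _ in forced) / len(forced))
```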
In philosophical terms, you seem to be thinking in terms of a causal theory of reference whereas I’m taking a neo-descriptivist approach. Both theories acknowledge that one aspect of meaning is what terms refer to and that obviously depends on the world. But if you consider cases like ‘creature with a heart’ and ‘creature with a kidney’ which may very well refer to the same things but clearly still differ in meaning, you can start to see there’s more to meaning than reference.
Neo-descriptivists would say there’s an intension, which is roughly a function from possible worlds to the term’s reference in that world. It explains how reference is determined and, unlike reference, does not depend on the external world. This makes it well-suited to explaining cognition and behavior in terms of processes internal to the brain, which might otherwise look like spooky action at a distance if you tried to explain them in terms of external reference. In the context of my project, I define intension here. See also Chalmers on two-dimensional semantics.
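Roughly, in symbols (my shorthand, not the official definition from the project):

$$\mathrm{int}(T)\colon\; w \;\mapsto\; \{x \in D_w \mid x \text{ satisfies } T \text{ in } w\}$$

So ‘creature with a heart’ and ‘creature with a kidney’ can pick out the same set at the actual world while their intensions still differ, because there are worlds where some creature has one organ but not the other.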
No, I’m definitely being more descriptivist than causal-ist here. The point I want to get at is on a different axis.
Suppose you were Laplace’s demon, and had perfect knowledge of a human’s brain (it’s not strictly necessary to assume determinism, but it sure makes the argument simpler). You would have no need to track the human’s “wants” or “beliefs”; you would just predict based on the laws of physics. Not only could you do a better job than some human psychologist on human-scale tasks (like predicting in advance which button the human will press), you would be making information-dense predictions about the microphysical state of the human’s brain that would just be totally beyond a model of humans coarse-grained to the level of psychology rather than physics.
So when you say “External reference is earned by reasoning in such a way that attributing content like ‘the cause of this and that sensory state …’ is a better explanation”, I totally agree, but I want to emphasize: better explanation for whom? If we somehow built Laplace’s demon, what I’d want to tell it is something like “model me according to my own standards for intentionality.”