Suppose I hear Bob say “I want to eat an apple.” Am I justified in assigning a higher probability to “Bob wants to eat an apple” after I hear this than before (assuming I don’t have some other evidence to the contrary, like someone is holding a gun to Bob’s head)?
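To make the question concrete, here is a toy Bayes update. All the numbers are invented for illustration; the only point is the direction of the update:

```python
# Toy Bayesian update for "Bob wants an apple" (W) given his
# utterance "I want to eat an apple" (U). All numbers are
# illustrative assumptions, not claims about real psychology.

prior_w = 0.1            # P(W): assumed base rate of apple-wanting
p_u_given_w = 0.8        # P(U | W): a sincere speaker usually says what he wants
p_u_given_not_w = 0.05   # P(U | not-W): lies, jokes, coercion, malfunction

# Bayes' rule: P(W | U) = P(U | W) P(W) / P(U)
p_u = p_u_given_w * prior_w + p_u_given_not_w * (1 - prior_w)
posterior_w = p_u_given_w * prior_w / p_u

print(round(posterior_w, 3))  # comfortably above the 0.1 prior
```

On these made-up numbers the posterior lands well above the prior; the philosophical question is what licenses assuming the likelihoods in the first place.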
I think this question hits the nail on the head. You are justified in assigning a higher probability to “Bob wants to eat an apple” just in case you are already justified in taking Bob to be a rational agent (other things being equal...). If Bob isn’t at least minimally rational, you can’t even get so far as construing his words as English, let alone trust that his intent in uttering them is to convey that he wants to eat an apple (think about assessing a wannabe AI chatbot, here). But in taking Bob to be rational, you are already taking him to have preferences and beliefs, and for there to be things which he ought or ought not to do. In other words, you have already crossed beyond what mere natural science provides for. This, anyway, is what I’m trying to argue.
I think I kind of see what you’re getting at. In order to recognize that Bob is rational, I have to have some way of knowing the properties of rationality, and the way we learn such properties does not seem to resemble the methods of the empirical sciences, like physics or chemistry.
But to me it does seem to bear some resemblance to the methods of mathematics. For example, in number theory we try to capture some of our intuitions about “natural numbers” in a set of axioms, which then allows us to derive other properties of natural numbers. In the study of rationality we have, for example, the von Neumann–Morgenstern axioms. Although there is much more disagreement about what an appropriate set of axioms might be where rationality is concerned, the basic methodology still seems similar. Do you agree?
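A minimal sketch of what the axiomatic approach delivers in the VNM case: if preferences over lotteries satisfy the axioms (completeness, transitivity, continuity, independence), they can be represented by expected utility. The utility numbers below are invented stand-ins:

```python
# Sketch of what a VNM representation gives you: a utility function
# over outcomes such that the agent ranks lotteries by expected
# utility. The utilities here are invented for illustration.

utility = {"apple": 1.0, "pear": 0.4, "nothing": 0.0}

def expected_utility(lottery):
    """lottery: list of (outcome, probability) pairs summing to 1."""
    return sum(utility[outcome] * p for outcome, p in lottery)

sure_pear = [("pear", 1.0)]
risky_apple = [("apple", 0.5), ("nothing", 0.5)]

# The represented agent prefers the risky apple iff its expected
# utility is higher than that of the sure pear.
print(expected_utility(risky_apple) > expected_utility(sure_pear))
```

The disagreement you mention would show up here as disagreement over which axioms constrain the preference relation, not over the derive-from-axioms methodology itself.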
I appreciate this clarification. The point is meant to be about, as you say, the empirical sciences. I agree that there can be arbitrarily sophisticated scientific theories concerning rational behaviour—just that these theories aren’t straight-forwardly continuous with the theories of natural science.
In case you haven’t encountered it and might be interested, the underdetermination problem associated with inferring to beliefs and desires from mere behaviour has been considered in some depth by a number of people, notably Donald Davidson, eg in his [Essays on Actions and Events](http://books.google.com/books/about/Essays_on_actions_and_events.html?id=Bj2HHI0c2RIC).
I agree that there can be arbitrarily sophisticated scientific theories concerning rational behaviour—just that these theories aren’t straight-forwardly continuous with the theories of natural science.
In your OP, you wrote that you found statements like
That X’s brain is in state ABC entails, other things being equal, that X ought to eat an apple.
implausible.
It seems quite possible to me that, perhaps through some method other than the methods of the empirical sciences (for example, through philosophical inquiry), we can determine that among the properties of “want” is that “want to eat an apple” correctly describes the brain state ABC or computational state DEF (or something of that nature). Do you still consider that statement implausible?
Would it be fair to say that your position is that there could be two physically identical brains, and one of them wants to eat an apple but the other doesn’t, or perhaps that one of them is rational but the other isn’t. In other words that preference-zombies or rationality-zombies could exist?
(In case it’s not clear why I’m saying this, this is what accepting
“want to eat an apple” implies being in brain state ABC or computational state DEF (or something of that nature)
while denying
being in brain state ABC or computational state DEF (or something of that nature) implies “want to eat an apple”

would imply.)
I think your question again gets right to the nub of the matter. I have no snappy answer to the challenge; here is my long-winded response.
The zombie analogy is a good one. I understand it’s meant just as an analogy; the intent is not to fall into the qualia quagmire. The thought is that from a purely naturalistic perspective, people can only properly be seen as, as you put it, preference- or rationality-zombies.
The issue here is the validity of identity claims of the form,
Wanting that P = being in brain state ABC
My answer is to compare them to the fate of identity claims relating to sensations (qualia again), such as
Having sensation S (eg, being in pain) = being in brain state DEF
Suppose being in pain is found empirically always to correlate to being in brain state DEF, and the identity is proposed. Qualiaphiles will object, saying that this identity misses what’s crucial to pain, viz, how it feels. The qualiaphile’s thought can be defended by considering the logic of identity claims generally (this adapted from Saul Kripke’s Naming and Necessity).
Scientific identity claims are necessary—if water = H2O in this world, then water = H2O in all possible worlds. That is, because water is a natural kind, whatever it is, it couldn’t have been anything else. It is possible for water to present itself to us in a different phenomenal aspect (‘ice9’!), but this is OK because what’s essential to water is its underlying structure, not its phenomenal properties. The situation is different for pain—what’s essential to pain is its phenomenal properties. Because pain essentially feels like this (so the story goes), its correlation with being in brain state DEF can only be contingent. Since identities of this kind, if true, are by their natures necessary, the identity is false.
There is a further step (lots of steps, I admit) to rationality. The thought is that our access to people’s rationality is ‘direct’ in the way our access to pain is. The unmediated judgement of rationality would, if push were to come to shove, trump the scientifically informed, indirect inference from brain states. Defending this proposition would take some doing, but the idea is that we need to understand each other as rational agents before we can get as far as dissecting ourselves to understand ourselves as mere objects.
It is still not clear whether you think rationality is analogous to qualia or is a quale.

I think the formal similarities of some aspects of arguments about qualia on the one hand and rationality on the other are the extent of the similarities. I haven’t followed all the recent discussions on qualia, so I’m not sure where you stand, but personally, I cannot make sense of the concept of qualia. Rationality-involving concepts (among them beliefs and desires), though, are absolutely indispensable. So I don’t think the rationality issue resolves into one about qualia.
I appreciated your first July 07 comment about the details as to how norms can be naturalized and started to respond, then noticed the sound of a broken record. Going round one more time, to me it boils down to what Hume took to be obvious:
What you ought to do is distinct from what you will do.
Natural science can tell you at best what you will do.
Natural science can’t tell you what you ought to do.
It is surprising to me there is so much resistance (I mean, from many people, not just yourself) to this train of thought. When you say in that earlier comment ‘You have a set of goals...’, you have already, in my view, crossed out of natural science. What natural science sees is just what it is your propensity to do, and that is not the same thing as a goal.
I think the formal similarities of some aspects of arguments about qualia on the one hand and rationality on the other are the extent of the similarities. I haven’t followed all the recent discussions on qualia, so I’m not sure where you stand, but personally, I cannot make sense of the concept of qualia. Rationality-involving concepts (among them beliefs and desires), though, are absolutely indispensable. So I don’t think the rationality issue resolves into one about qualia.
Rationality uncontroversially involves rules and goals, both of which are naturalisable. You have said there is an extra ingredient of “caring”, which sounds qualia-like.
What you ought to do is distinct from what you will do.
Not in all cases, surely? What would an is/ought gap be when behaviour matched the ideal?
Natural science can tell you at best what you will do.
Natural science can’t tell you what you ought to do.
That depends on what you mean by ‘can’. All the information about the intentions and consequences of your actions is encoded in a total physical picture of the universe. Where else would it be? OTOH, natural science, in practice, cannot produce that answer.
It is surprising to me there is so much resistance (I mean, from many people, not just yourself) to this train of thought. When you say in that earlier comment ‘You have a set of goals...’, you have already, in my view, crossed out of natural science. What natural science sees is just what it is your propensity to do, and that is not the same thing as a goal.
Natural science is not limited to behaviour: it can peek inside a black box and see that a certain goal is encoded into it, even if it is not being achieved.
There are underdetermination problems all over the philosophy of science. I don’t see how this poses a special problem for norms, or rationality. When two domains of science are integrated, it is often via proposed bridge laws that may not provide an exactly intuitive match. For example, some of the gases that have a high “temperature” when that is defined as mean kinetic energy, might feel somewhat colder than some others with lower “temperature”. But we accept the reduction provided it succeeds well enough.
If there are no perfect conceptual matches by definitions of a to-be-reduced term in the terms of the reducing domain, that is not fatal. If we can’t find one now, that is even less so.
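The temperature case can be put in numbers. A minimal sketch of the proposed bridge law for an ideal monatomic gas, where temperature is identified with mean molecular kinetic energy (the sample value is just illustrative):

```python
# Sketch of the bridge law: for an ideal monatomic gas, temperature
# is tied to mean molecular kinetic energy by <KE> = (3/2) * k_B * T,
# so T = 2 <KE> / (3 k_B).

k_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature_from_mean_ke(mean_ke):
    """Temperature in kelvin from mean molecular kinetic energy in joules."""
    return 2 * mean_ke / (3 * k_B)

# A mean KE of about 6.21e-21 J corresponds to roughly room temperature.
print(round(temperature_from_mean_ke(6.21e-21)))  # ~300 K
```

The reducing concept is precise even where felt warmth diverges from it, which is the sense in which the reduction “succeeds well enough”.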
I agree that underdetermination problems are distinct from problems about norms, that is, from the is/ought problem. Apologies if I introduced confusion in mentioning them. They are relevant because they arise (roughly speaking) at the interface between decision theory and empirical science, ie, where you try to map mere behaviours onto desires and beliefs.
My understanding is that in philosophy of science, an underdetermination problem arises when all evidence is consistent with more than one theory or explanation. You have a scientist, a set of facts, and more than one theory which the scientist can fit to the facts. In answer to your initial challenge, the problem is different for human psychology because the underdetermination is not of the scientist’s theory but supposedly of one set of facts (facts about beliefs and desires) by another (behaviour and all cognitive states of the agent). That is, in contrast to the basic case, here you have a scientist, one set of facts (about a person’s behaviour and cognitive states), a second set of supposed facts (about the person’s beliefs and desires), and the problem is that the former set underdetermines the latter.
You seem to be introducing a fact/theory dichotomy. That doesn’t seem promising.
If we look at successful reductions in the sciences, they can make at least some of our underdetermination problems disappear. Mean kinetic energy of a gas is a more precise notion than “temperature” was in prior centuries, for example. I wouldn’t try to wrestle with the concepts of cognitive states and behaviors to resolve underdetermination. Instead, it seems worthwhile to propose candidate bridge laws and see where they get us. I think that Millikan et al may well be onto something.
As I understand it, the problem of scientific underdetermination can only be formulated if we make some kind of fact/theory distinction—observation/theory would be better; is that OK with you?
I’m not actually seeing how the temperature example is an instance of underdetermination, and I’m a little fuzzy on where bridge laws fit in, but would be open to clarification on these things.
Well, scientific underdetermination problems can be formulated with a context-relative observation/theory distinction. But this is compatible with seeing observations as potentially open to contention between different theories (and in that sense “theory-laden”). The question is, are these distinctions robust enough to support your argument?
By the way, I’m confused by your use of “cognitive states” in your 08 July comment above, where it is contrasted to beliefs and desires. Did you mean neural states?
Temperature was underdetermined in the early stages of science because the nebulae of associated phenomena had not been sorted out. Sometimes different methods of assessing temperature could conflict. E.g., object A might be hotter to the touch than B, yet when both are placed in contact with C, B warms C and A cools it.
I’m confused by your use of “cognitive states” in your 08 July comment above
You are quite right; sorry about the confusion. I meant to say behaviour and computational states, the thought being that we are trying to correlate having a belief or desire to being in some combination of these.
The question is, are these distinctions robust enough to support your argument?
I understand you’re referring here to the claim (for which I can’t take credit) that facts about behaviour underdetermine facts about beliefs and desires. Because the issue (or so I want to argue) is of underdetermination of one set of potential facts (namely, about beliefs and desires) by another (uninterpreted behaviour), rather than of underdetermination of scientific theory by fact or observation, I’m not seeing that the issue of the theory-ladenness of observation ultimately presents a problem.
The underdetermination is pretty easy to show, at least on a superficial level. Suppose you observe a person, X, pluck an apple from a tree and eat it (facts about behaviour).
You infer:
X desires that s/he eat an apple, and X believes that if s/he plucks and bites this fruit, s/he will eat an apple.
But couldn’t one also infer,
X desires that s/he eat a pear, and X believes (mistakenly) that if s/he plucks and bites this fruit, s/he will eat a pear.
or
X desires that s/he be healthy, and X believes that if s/he plucks and bites this fruit (whatever the heck it is), s/he will be healthy.
You may think that if you observe enough behaviour, you can constrain these possibilities. There are arguments (which I acknowledge I have not given), which show (or so a lot of people think) that this is not the case—the underdetermination keeps pace.
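The point can be made concrete with a toy model: several quite different belief/desire assignments (the three above, plus any number of gerrymandered others) all predict the same observed act, so the observation alone cannot pick one out. The dictionaries here are invented stand-ins:

```python
# Toy illustration of the underdetermination in the apple example.
# Each hypothesis pairs a desire with a belief about which act would
# satisfy it; all three predict the very same observed behaviour.

observed = "pluck and bite the fruit"

hypotheses = [
    ("eat an apple", {"eat an apple": "pluck and bite the fruit"}),
    ("eat a pear",   {"eat a pear":   "pluck and bite the fruit"}),  # mistaken belief
    ("be healthy",   {"be healthy":   "pluck and bite the fruit"}),
]

def predicted_action(desire, belief):
    """The act the agent would perform, given this desire and belief."""
    return belief[desire]

consistent = [d for d, b in hypotheses if predicted_action(d, b) == observed]
print(len(consistent))  # all three hypotheses fit the one observation
```

The substantive philosophical claim, of course, is the one flagged above: that the underdetermination survives arbitrarily much additional behavioural evidence, which this toy does not show.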
Because the issue [...] is of underdetermination of one set of potential facts (namely, about beliefs and desires) by another (uninterpreted behaviour), rather than of underdetermination of scientific theory by fact or observation, I’m not seeing that the issue of the theory-ladenness of observation ultimately presents a problem.
Emphasis added. The issue you’re pointing to still just looks like a particular case of underdetermination of the more-theoretical by the more-observational (and the associated “problem” of theory-laden observations). Nothing new under the sun here. Just the same old themes, with minor variations, that apply all over science. Thus, no reason to single out psychology for exclusion from naturalistic study.
One observer looks at Xanadu and sees that she wanted an apple, and that she was satisfied. Another looks at her and sees only that she plucked an apple, and infers that she wanted it. Another looks and sees a brown patch here, and a red patch there, and infers that these belonged to a human and an apple respectively… Compare: one scientist looks at the bubble chamber and sees two electrons fly through it. Another sees two bubble tracks … etc.
As I tried to explain in my July 08 post, there is a difference.
Straight-forward scientific underdetermination:
One observer/scientist
One, unproblematic set of facts (a curved white streak on a film plate exposed in a bubble chamber)
Any number of mutually incompatible scientific theories, each of which adequately explains this and all other facts. All theories completely adequate to all observations. The only puzzle is that there can be more than one theory. (Tempting to imagine two of these might be, say, established particle theory, and Wolfram’s New Kind of Science conception of nature. Though presumably they would ultimately make divergent predictions, meaning this is a misleading thought).
Underdetermination of psychological facts by naturalistic facts:
One observer/scientist
One, unproblematic set of facts (behaviour and brain states, eg, a person picking an apple, and all associated neurological events)
Any number of problematic sets of supposed facts (complete but mutually incompatible assignments of beliefs and desires to the person consistent with her behaviour and brain states)
No (naturalistic) theory which justifies choosing one of the latter sets of facts, that is, justifies an assignment of beliefs and desires to the person.
The latter problem is not just an instance of the former. The problem for physics comparable to psychological underdetermination might look like this (ignoring Reality for the moment to make the point):
Scientist observes trace on film plate from cloud chamber experiment.
Scientist’s theory is consistent with two different possible explanations (in one explanation it’s an electron, in another it’s a muon).
No further facts can nail down which explanation is correct, and all facts can anyway be explained, if more pedantically, without appeal to either electrons or muons. That is, both explanations can be reconciled equally well with all possible facts, and neither explanation anyway is ultimately needed. The suggestion is that the posits in question, electrons and muons (read: different beliefs), would be otiose for physics (read: naturalistic psychology).
The differences you’ve identified amount to (A) both explanations can be reconciled equally well with all possible facts, and (B) all facts can anyway be explained without the theoretical posits. But (B) doesn’t seem in-principle different from any other scientific theoretical apparatus. Simply operationalize it thoroughly and say “shut up and calculate!”
So that leaves (A). I’ll admit that this makes a big difference, but it also seems a very tall order. The idea that any given hypothesized set of beliefs and desires is compatible with all possible facts, is not very plausible on its face. Please provide links to the aforementioned arguments to that effect, in the literature.
The idea that any given hypothesized set of beliefs and desires is compatible with all possible facts, is not very plausible on its face.
I didn’t mean to say this, if I did. The thesis is that there are indefinitely many sets of beliefs and desires compatible with all possible behavioural and other physical facts. And I do admit it seems a tall order. But then again, so does straight-forward scientific underdetermination, it seems to me. Just to be clear, my personal preoccupation is the prescriptive or normative nature of oughts and hence wants and beliefs, which I think is a different problem than the underdetermination problem.
The canonical statement comes in Chapter 2 of W.V.O. Quine’s Word and Object. Quine focusses on linguistic behaviour, and on the conclusion that there is no unique correct translation manual for interpreting one person’s utterances in the idiolect of another (even if they both speak, say, English). The claims about beliefs are a corollary. Donald Davidson takes up these ideas and relates them specifically to agents’ beliefs in a number of places, notably his papers ‘Radical Interpretation’, ‘Belief and the Basis of Meaning’, and ‘Thought and Talk’, all reprinted in his Inquiries into Truth and Interpretation. Hilary Putnam, in his paper ‘Models and Reality’ (reprinted in his Realism and Reason), tried to give heft to what (I understand) comes down to Quine’s idea by arguing it to be a consequence of the Löwenheim–Skolem theorem of mathematical logic.
Timothy Bays has a reply to Putnam’s alleged proof sufficient to render the latter indecisive, as far as I can see. The set theory is a challenge for me, though.
As for Quine, on the one hand I think he underestimates the kinds of evidence that can bear, and he understates the force of simplicity considerations (“undetached rabbit-parts” could only be loved by a philosopher). But on the other hand, and perhaps more important, he seems right to downplay any remaining “difference” of alternative translations. It’s not clear that the choice between workable alternatives is a problem.
Thanks for the link to the paper by Timothy Bays. It looks like a worthwhile, if rather challenging, read.
I have to acknowledge there’s lots to be said in response to Quine and Putnam. I could try to take on the task of defending them, but I suspect your ability to come up with objections would well outpace my ability to come up with responses. People get fed up with philosophers’ extravagant thought experiments, I know. I guess Quine’s implicit challenge with his “undetached rabbit parts” and so on is to come up with a clear (and, of course, naturalistic) criterion which would show the translation to be wrong. Simplicity considerations, as you suggest, may do it, but I’m not so sure.
It seems quite possible to me that, perhaps through some method other than the methods of the empirical sciences (for example, through philosophical inquiry), we can determine that among the properties of “want” is that “want to eat an apple” correctly describes the brain state ABC or computational state DEF (or something of that nature). Do you still consider that statement implausible?
This seems reasonable, but I have to ask about “correctly describes”. The statement
“want to eat an apple” implies being in brain state ABC or computational state DEF (or something of that nature)
is plausible to me. I think the reverse implication, though, raises a problem:
being in brain state ABC or computational state DEF (or something of that nature) implies “want to eat an apple”
But maybe neither of these is what you have in mind?
I think I mean the latter. What problem do you see with it?
I do accept that ‘wants’ imply ‘oughts’. It’s an oversimplification, but the thought is that statements such as
X’s wanting that X eat an apple implies (many other things being equal) that X ought to eat an apple.
are intuitively plausible. If wanting carries no implications for what one ought to do, I don’t see how motivation can get off the ground.
Now, if we have
1) wanting that P implies one ought to do Q,
and
2) being in physical state ABC implies wanting that P
then, by transitivity of implication, we get
3) being in physical state ABC implies one ought to do Q
And this is just the kind of implication I’m trying to show is problematic.
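The step from 1) and 2) to 3) is just the hypothetical syllogism, and its propositional form can be checked mechanically. A brute-force truth-table verification (with propositional stand-ins for the three claims; the validity of the form, of course, is not where the philosophical dispute lies):

```python
# Truth-table check of the hypothetical syllogism used above:
# from (2) ABC -> P and (1) P -> Q, infer (3) ABC -> Q.

from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

valid = all(
    implies(abc, q)                       # conclusion (3)
    for abc, p, q in product([True, False], repeat=3)
    if implies(p, q) and implies(abc, p)  # premises (1) and (2)
)
print(valid)  # the conclusion holds in every model of the premises
```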
I don’t see the problem with the latter either.
There are underdetermination problems all over the philosophy of science. I don’t see how this poses a special problem for norms, or rationality. When two domains of science are integrated, it is often via proposed bridge laws that may not provide an exactly intuitive match. For example, some of the gases that have a high “temperature” when that is defined as mean kinetic energy, might feel somewhat colder than some others with lower “temperature”. But we accept the reduction provided it succeeds well enough.
If there are no perfect conceptual matches by definitions of a to-be-reduced term in the terms of the reducing domain, that is not fatal. If we can’t find one now, that is even less so.
I agree that underdetermination problems are distinct from problems about norms -from the is/ought problem. Apologies if I introduced confusion in mentioning them. They are relevant because they arise (roughly speaking) at the interface between decision theory and empirical science, ie, where you try to map mere behaviours onto desires and beliefs.
My understanding is that in philosophy of science, an underdetermination problem arises when all evidence is consistent with more than one theory or explanation. You have a scientist, a set of facts, and more than one theory which the scientist can fit to the facts. In answer to your initial challenge, the problem is different for human psychology because the underdetermination is not of the scientist’s theory but supposedly of one set of facts (facts about beliefs and desires) by another (behaviour and all cognitive states of the agent). That is, in contrast to the basic case, here you have a scientist, one set of facts -about a person’s behaviour and cognitive states- a second set of suppposed facts -about the person’s beliefs and desires- and the problem is that the former set underdetermine the latter.
You seem to be introducing a fact/theory dichotomy. That doesn’t seem promising.
If we look at successful reductions in the sciences, they can make at least some of our underdetermination problems disappear. Mean kinetic energy of a gas is a more precise notion than “temperature” was in prior centuries, for example. I wouldn’t try to wrestle with the concepts of cognitive states and behaviours to resolve underdetermination. Instead, it seems worthwhile to propose candidate bridge laws and see where they get us. I think that Millikan et al. may well be onto something.
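For concreteness, the textbook bridge law behind the temperature example identifies the temperature of an ideal monatomic gas with the mean translational kinetic energy of its molecules (a standard kinetic-theory result, supplied here for illustration rather than drawn from anything said in this thread):

```latex
\langle E_{k} \rangle \;=\; \tfrac{3}{2}\, k_{B}\, T
```

where \(k_{B}\) is Boltzmann’s constant. The identification is stipulated rather than discovered by conceptual analysis of “temperature”, which is why imperfect intuitive matches (gases that “feel” colder despite higher mean kinetic energy) need not undermine the reduction.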
As I understand it, the problem of scientific underdetermination can only be formulated if we make some kind of fact/theory distinction—observation/theory would be better, is that ok with you?
I’m not actually seeing how the temperature example is an instance of underdetermination, and I’m a little fuzzy on where bridge laws fit in, but would be open to clarification on these things.
Well, scientific underdetermination problems can be formulated with a context-relative observation/theory distinction. But this is compatible with seeing observations as potentially open to contention between different theories (and in that sense “theory-laden”). The question is, are these distinctions robust enough to support your argument?
By the way, I’m confused by your use of “cognitive states” in your 08 July comment above, where it is contrasted to beliefs and desires. Did you mean neural states?
Temperature was underdetermined in the early stages of science because the nebula of associated phenomena had not been sorted out. Sometimes different methods of assessing temperature could conflict. E.g., object A might be hotter to the touch than B, yet when both are placed in contact with C, B warms C and A cools it.
You are quite right (sorry about the confusion). I meant to say behaviour and computational states, the thought being that we are trying to correlate having a belief or desire with being in some combination of these.
I understand you’re referring here to the claim (for which I can’t take credit) that facts about behaviour underdetermine facts about beliefs and desires. Because the issue, or so I want to argue, is one of underdetermination of one set of potential facts (namely, about beliefs and desires) by another (uninterpreted behaviour), rather than underdetermination of scientific theory by fact or observation, I’m not seeing that the theory-ladenness of observation ultimately presents a problem.
The underdetermination is pretty easy to show, at least on a superficial level. Suppose you observe
a person, X, pluck an apple from a tree and eat it (facts about behaviour).
You infer:
X desires that s/he eat an apple, and X believes that if s/he plucks and bites this fruit, s/he will eat an apple.
But couldn’t one also infer,
X desires that s/he eat a pear, and X believes (mistakenly) that if s/he plucks and bites this fruit, s/he will eat a pear.
or
X desires that s/he be healthy, and X believes that if s/he plucks and bites this fruit (whatever the heck it is), s/he will be healthy.
You may think that if you observe enough behaviour, you can constrain these possibilities. There are arguments (which I acknowledge I have not given) which show, or so a lot of people think, that this is not the case: the underdetermination keeps pace.
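The apple/pear/health example can be put as a toy model, just to make its structure vivid (this is my own illustrative sketch, with made-up predicates; it is not meant to carry any argumentative weight):

```python
# Toy sketch: several mutually incompatible belief/desire assignments
# each predict the very same observed behaviour.

# A hypothesis pairs a desire with an instrumental belief about how
# plucking-and-biting this fruit serves that desire.
hypotheses = [
    ("eat an apple", "plucking and biting this fruit yields eating an apple"),
    ("eat a pear",   "plucking and biting this fruit yields eating a pear"),
    ("be healthy",   "plucking and biting this fruit yields being healthy"),
]

def predicted_action(desire, belief):
    # A minimally rational agent acts on the means it believes serve its ends.
    # In every hypothesis above, the believed means is the same action.
    if "plucking and biting" in belief:
        return "pluck and bite the fruit"
    return "do nothing"

observed = "pluck and bite the fruit"

# Every hypothesis survives the observation: the behaviour alone
# underdetermines which belief/desire pair is the correct one.
consistent = [(d, b) for d, b in hypotheses
              if predicted_action(d, b) == observed]

print(len(consistent))  # 3
```

Of course, nothing in this little model shows that further observations couldn’t eventually discriminate between the hypotheses; that stronger claim is what the arguments alluded to above are supposed to establish.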
Emphasis added. The issue you’re pointing to still just looks like a particular case of underdetermination of the more-theoretical by the more-observational (and the associated “problem” of theory-laden observations). Nothing new under the sun here. Just the same old themes, with minor variations, that apply all over science. Thus, no reason to single out psychology for exclusion from naturalistic study.
One observer looks at Xanadu and sees that she wanted an apple, and that she was satisfied. Another looks at her and sees only that she plucked an apple, and infers that she wanted it. Another looks and sees a brown patch here, and a red patch there, and infers that these belonged to a human and an apple respectively… Compare: one scientist looks at the bubble chamber and sees two electrons fly through it. Another sees two bubble tracks … etc.
As I tried to explain in my July 08 post, there is a difference.
Straight-forward scientific underdetermination:
One observer/scientist
One, unproblematic set of facts (a curved white streak on a film plate exposed in a bubble chamber)
Any number of mutually incompatible scientific theories, each of which adequately explains this and all other facts. All theories completely adequate to all observations. The only puzzle is that there can be more than one theory. (Tempting to imagine two of these might be, say, established particle theory, and Wolfram’s New Kind of Science conception of nature. Though presumably they would ultimately make divergent predictions, meaning this is a misleading thought).
Underdetermination of psychological facts by naturalistic facts:
One observer/scientist
One, unproblematic set of facts (behaviour and brain states, e.g., a person picking an apple, and all associated neurological events)
Any number of problematic sets of supposed facts (complete but mutually incompatible assignments of beliefs and desires to the person consistent with her behaviour and brain states)
No (naturalistic) theory which justifies choosing one of the latter sets of facts, that is, which justifies an assignment of beliefs and desires to the person.
The latter problem is not just an instance of the former. The problem for physics comparable to psychological underdetermination might look like this (ignoring Reality for the moment to make the point):
Scientist observes a trace on a film plate from a bubble chamber experiment.
Scientist’s theory is consistent with two different possible explanations (in one explanation it’s an electron, in another it’s a muon).
No further facts can nail down which explanation is correct, and all facts can anyway be explained, if more pedantically, without appeal to either electrons or muons. That is, both explanations can be reconciled equally well with all possible facts, and neither explanation is ultimately needed anyway. The suggestion is that the posits in question, electrons and muons (read: different beliefs), would be otiose for physics (read: naturalistic psychology).
The differences you’ve identified amount to (A) both explanations can be reconciled equally well with all possible facts, and (B) all facts can anyway be explained without the theoretical posits. But (B) doesn’t seem in-principle different from any other scientific theoretical apparatus. Simply operationalize it thoroughly and say “shut up and calculate!”
So that leaves (A). I’ll admit that this makes a big difference, but it also seems a very tall order. The idea that any given hypothesized set of beliefs and desires is compatible with all possible facts, is not very plausible on its face. Please provide links to the aforementioned arguments to that effect, in the literature.
I didn’t mean to say this, if I did. The thesis is that there are indefinitely many sets of beliefs and desires compatible with all possible behavioural and other physical facts. And I do admit it seems a tall order. But then again, so does straight-forward scientific underdetermination, it seems to me. Just to be clear, my personal preoccupation is the prescriptive or normative nature of oughts and hence wants and beliefs, which I think is a different problem from the underdetermination problem.
The canonical statement comes in Chapter 2 of W.V.O. Quine’s Word and Object. Quine focusses on linguistic behaviour, and on the conclusion that there is no unique correct translation manual for interpreting one person’s utterances in the idiolect of another (even if they both speak, say, English). The claims about beliefs are a corollary. Donald Davidson takes up these ideas and relates them specifically to agents’ beliefs in a number of places, notably his papers ‘Radical Interpretation’, ‘Belief and the Basis of Meaning’, and ‘Thought and Talk’, all reprinted in his Inquiries into Truth and Interpretation. Hilary Putnam, in his paper ‘Models and Reality’ (reprinted in his Realism and Reason), tried to give heft to what (I understand) comes down to Quine’s idea by arguing it to be a consequence of the Löwenheim–Skolem theorem of mathematical logic.
Timothy Bays has a reply to Putnam’s alleged proof sufficient to render the latter indecisive, as far as I can see. The set theory is a challenge for me, though.
As for Quine, on the one hand I think he underestimates the kinds of evidence that can bear, and he understates the force of simplicity considerations (“undetached rabbit-parts” could only be loved by a philosopher). But on the other hand, and perhaps more important, he seems right to downplay any remaining “difference” of alternative translations. It’s not clear that the choice between workable alternatives is a problem.
Thanks for the link to the paper by Timothy Bays. It looks like a worthwhile (if rather challenging) read.
I have to acknowledge there’s lots to be said in response to Quine and Putnam. I could try to take on the task of defending them, but I suspect your ability to come up with objections would well outpace my ability to come up with responses. People get fed up with philosophers’ extravagant thought experiments, I know. I guess Quine’s implicit challenge with his “undetached rabbit parts” and so on is to come up with a clear (and, of course, naturalistic) criterion which would show the translation to be wrong. Simplicity considerations, as you suggest, may do it, but I’m not so sure.
I appreciate this clarification. The point is indeed meant to be about, as you say, the empirical sciences. I agree that there can be arbitrarily sophisticated scientific theories concerning rational behaviour—just that these theories aren’t straight-forwardly continuous with the theories of natural science.
In case you haven’t encountered it and may be interested, the underdetermination problem associated with inferring to beliefs and desires from mere behaviour (that is, the interface between natural science and the study of rationality) has been considered in some depth by a number of people including notably Donald Davidson, eg in his [Essays on Actions and Events](http://books.google.com/books/about/Essays_on_actions_and_events.html?id=Bj2HHI0c2RIC).