Consider then a virus particle … Surely there is nothing in biochemistry, genetics or other science which implies there is anything this very particle ought to do. It’s true that we may think of it as having the goal of replicating itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).
No. The distinction between those viral behaviors that tend to contribute to the virus replicating and those viral behaviors that do not contribute does issue from science. It is not a metaphor to call actions that detract from reproduction “mistakes” on the part of the virus, any more than it is a metaphor to call certain kinds of chemical reactions “exothermic”. There is no ‘open question’ issue here—“mistake”, like “exothermic”, does not have any prior metaphysical meaning. We are free to define it as we wish, naturalistically.
So much for the practical ought, the version of “ought” for which “ought not” is called a mistake because the act generates consequences contrary to the agent’s interests. What about the moral ought, the version of “ought” for which “ought not” is called wrong? Can we also define this kind of ought naturalistically? I think that we can, because once again I deny that “wrong” has any prior metaphysical meaning. The trick is to make the new (by definition) meaning not clash too harshly with the existing metaphysical connotations.
How is this for a first attempt at a naturalistic definition of the moral ought as a subset of the practical ought? An agent morally ought not to do something iff it tends to generate consequences contrary to the agent’s interests, those negative consequences arising from the reactions of disapproval coming from other agents.
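Schematically, and with predicate names invented purely for display (this just restates the definition above; nothing is added to it):

\[
\mathrm{MorallyOughtNot}(A, x) \;\iff\; \mathrm{HarmsInterests}(x, A) \;\wedge\; \mathrm{ViaDisapproval}(x, A),
\]

where \(\mathrm{HarmsInterests}(x, A)\) says that doing \(x\) tends to generate consequences contrary to \(A\)’s interests (the practical-ought condition), and \(\mathrm{ViaDisapproval}(x, A)\) says that those negative consequences arise from other agents’ reactions of disapproval. The moral ought is then the subset of the practical ought carved out by the second conjunct.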
In general, it is not difficult at all to define either kind of ought naturalistically, so long as one is not already metaphysically committed to the notion that the word ‘ought’ has a prior metaphysical meaning.
There is no ‘open question’ issue here—“mistake”, like “exothermic”, does not have any prior metaphysical meaning. We are free to define it as we wish, naturalistically.
I’m having trouble with the word “metaphysical”. In order for me to make sense of the claim that “mistake” and “exothermic” do not have prior metaphysical meanings, I would like to see some examples of words that do have prior metaphysical meanings, so that I can try to figure out from contrasting examples of having and not having prior metaphysical meanings what it means to have a prior metaphysical meaning. Because at the moment I don’t know what you’re talking about.
Hmmm. I may be using “metaphysical” inappropriately here. I confess that I am currently reading something that uses “metaphysical” as a general term of deprecation, so some of that may have worn off. :)
Let me try to answer your excellent question by analogy to geometry, without abandoning “metaphysical”. As is well known, in geometry, many technical terms are given definitions, but it is impossible to define every technical term. Some terms (“point”, “line”, and “on” are examples) are left undefined, though their meanings are supplied implicitly by way of axioms. Undefined terms in mathematics correspond (in this analogy) to words with prior metaphysical meaning in philosophical discourse. You can’t define them, because their meaning is somehow “built in”.
To give a rather trivial example, when trying to generate a naturalistic definition of “ought”, we usually assume we have a prior metaphysical meaning for “is”.

Hope that helped.
An agent morally ought not to do something iff it tends to generate consequences contrary to the agent’s interests, those negative consequences arising from the reactions of disapproval coming from other agents.
That doesn’t work. It would mean conformists are always in the right, irrespective of what they are conforming to.
As you may have noticed, that definition was labeled as a “first attempt”. It captures some of our intuitions about morality, but not all. In particular, its biggest weakness is that it fails to satisfy moral realists for precisely the reason you point out.
I have a second quill in my quiver. But before using it, I’m going to split the concept of morality into two pieces. One piece is called “de facto morality”. I claim that the definition I provided in the grandparent is a proper reductionist definition of de facto morality and captures many of (some) people’s intuitions about morality. The second piece is called “ideal morality”. This piece is essentially what de facto morality ought to be.
So, your conformist may well be automatically in the right with respect to de facto morality. But it is possible for a moral reformer to point out that he and all of his fellows are in the wrong with respect to ideal morality. That is, the reformer claims that the society would be better off if its de facto conventions were amended from their present unsatisfactory status to become more like the ideal. And, I claim, given the right definition of “society would be better off”, this “ideal morality” can be given an objective and naturalistic definition.
For more details, see Binmore—Game Theory and the Social Contract
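To make the de facto / ideal contrast concrete, here is a toy sketch in the spirit of the equilibrium-selection story (my illustration, not a summary of Binmore’s actual argument): a convention game with two self-enforcing conventions, one of which is better for everyone.

```python
# Two conventions, A and B. Each is self-enforcing (a Nash equilibrium),
# but convention A is better for everyone than convention B.
# Payoffs are (row player, column player).
payoffs = {
    ("A", "A"): (3, 3),  # the 'ideal' convention
    ("A", "B"): (0, 0),  # miscoordination
    ("B", "A"): (0, 0),  # miscoordination
    ("B", "B"): (1, 1),  # the 'de facto' convention: stable, but worse
}

def other(c):
    return "B" if c == "A" else "A"

def is_equilibrium(r, c):
    """Neither player gains by unilaterally abandoning the convention."""
    return (payoffs[(r, c)][0] >= payoffs[(other(r), c)][0]
            and payoffs[(r, c)][1] >= payoffs[(r, other(c))][1])

for profile, (u1, u2) in payoffs.items():
    status = "equilibrium" if is_equilibrium(*profile) else ""
    print(profile, (u1, u2), status)

# Both (A, A) and (B, B) come out as equilibria. A conformist at (B, B) is
# 'in the right' de facto, yet the reformer's claim is also objective:
# everyone's payoff at (A, A) is strictly higher.
```

The point of the sketch is only that “society would be better off” can be cashed out as an ordinary Pareto comparison between equilibria, which is a naturalistic, checkable claim.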
Not exactly. It means that conformists are never morally wrong, unless some group (probably one that they’re not conforming with) punishes them for conforming. They can be morally neutral when conforming, and may be rationally wrong at the same time.
Can we also define this kind of ought naturalistically? I think that we can, because once again I deny that “wrong” has any prior metaphysical meaning. The trick is to make the new (by definition) meaning not clash too harshly with the existing metaphysical connotations.
The main trick seems to be getting people to agree on a definition. For instance, this:
How is this for a first attempt at a naturalistic definition of the moral ought as a subset of the practical ought? An agent morally ought not to do something iff it tends to generate consequences contrary to the agent’s interests, those negative consequences arising from the reactions of disapproval coming from other agents.
...aims rather low. That just tells people to do what they would do anyway. Part of the social function of morality is to give people an ideal to aim towards personally. Another part is to hold up an ideal form of behaviour in order to manipulate others into behaving “better”. Yet another is to allow people to signal their goodness by broadcasting their moral code; done right, that makes them seem more trustworthy and predictable. Your proposal does not score very well on these fronts.
I think this is right, except possibly for the part about no prior metaphysical meaning. The later explanation of that part didn’t clarify it for me. Instead, I’ll just indicate what prior meaning I find attached to the idea that “the virus replicated wrongly.”
In biology, the idea that organs and behaviors and so on have functions is quite common and useful. The novice medical student can make many correct inferences about the heart by supposing that its function is to pump blood, for example. The idea preceded Darwin, but post-Darwin, we can give a proper naturalistic reduction for it. Roughly speaking, an organ’s function is F iff in the ancestral environment, the organ’s performance of F is what it was selected for. Various RNA features in a virus might have functions in this sense, and if so, that gives the meaning of saying that in a particular case, the viral reproduction mechanism failed to operate correctly.
That’s not a moral norm. It’s not even the kind of norm relating to an agent’s interests, in my view. But it is a norm.
There was a pre-existing meaning of “biological function” before Darwin came around. So, a Darwinian definition of biological function was not a purely stipulative one. It succeeded only because it captured enough of the tentatively or firmly accepted notions about “biological function” to make reasonably good sense of all that.
… except possibly for the part about no prior metaphysical meaning.
I think I see the source of the difficulty now. My fault. BobTheBob mentioned the mistake of replicating with errors. I took this to be just one example of a possible mistake by a virus, and thought of several more—inserting into the wrong species of host, for example, or perhaps incorporating an instance of the wrong peptide into the viral shell after replicating the viral genome.
I then sought to define ‘mistake’ to capture the common fitness-lowering feature of all these possible mistakes. However, I did not make clear what I was doing, and my readers naturally thought I was still dealing with a replication error as the only kind of mistake.

Sorry to have caused this confusion.
If I bet on a fair die’s rolling a 6 as though the probability were higher than 1/6, because 6 hasn’t come up in the last ten rolls (meaning it’s now ‘due’), I make a mistake. I commit an error of reasoning; I do something wrong; I act in a manner I ought not to.
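To spell out the arithmetic of that judgement (just the independence point made explicit):

\[
P\big(\text{6 on roll } 11 \,\big|\, \text{no 6 on rolls } 1,\dots,10\big) \;=\; P\big(\text{6 on roll } 11\big) \;=\; \tfrac{1}{6},
\]

since the rolls of a fair die are independent. Any bet priced as though the probability of a 6 exceeded 1/6 therefore has a lower expected value than the fair price warrants, and that gap is what makes the bet a mistake.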
What about the virus particle which, in the course of sloshing about in an appropriate medium, participates in the coming into existence of a particle composed of RNA which, as it happens, is mostly identical to it but differs in a few places? Are you saying that this particle makes a mistake in the same sense of ‘mistake’ as I do in making my bet?
Option (1): The sense is precisely the same (and it is unproblematically naturalistic). In this case I have to ask what the principles are by which one infers conclusions about a virus’s mistakes from facts about replication. What are the physical laws, how are their consequences (the consequences, again, being claims about what a virus ought to do) measured or verified, and so on?
Option (2): The senses are different. This was the point of calling the RNA mistake metaphorical. It was to convey that the sense is importantly different from what it is in the betting case. The idea is that the sense, if any, in which a virus makes a ‘mistake’ in giving rise to a non-exact replica of itself is not enough to sustain the kind of norms required for rationality. It is not enough to sustain the conclusions about my betting behaviour. Is this fair?
Not really. You started by making an argument that listed a series of stages (virus, bacterium, nematode, man) and claimed that at no stage along the way (before the last) were any kind of normative concepts applicable. Then, when I suggested the standard evolutionary explanation for the illusion of teleology in nature, you shifted the playing field. In option 1, you demand that I supply standard scientific expositions of the natural history of your chosen biological examples. In option 2 you suggest that you were just kidding in even mentioning viruses, bacteria and nematodes. Unless an organism has the cognitive equipment to make mistakes in probability theory, you simply are not interested in speaking about it normatively.
Do I understand that you are claiming that humans are qualitatively exceptional in the animal kingdom because the word “ought” is uniquely applicable to humans? If so, let me suggest a parallel sequence to the one you suggested starting from viruses. Zygote, blastula, fetus, infant, toddler, teenager, adult. Do you believe it is possible to tell a teenager what she “ought” to do? At what stage in development do normative judgements become applicable?

Here is a cite for sorites. Couldn’t resist the pun.
I appreciate your efforts to spell things out. I have to say I’m getting confused, though.
You started by making an argument that listed a series of stages (virus, bacterium, nematode, man) and claimed that at no stage along the way (before the last) were any kind of normative concepts applicable.
I meant to say that at no stage (including the last!) does the addition of merely naturalistic properties turn a thing into something subject to norms, something of which it is right to say it ought, for its own sake, to do this or that.
I also said that the sense of right and wrong and of purpose which biology provides is merely metaphorical. When you talk about “the illusion of teleology in nature”, that’s exactly what I was getting at (or so it seems to me). That is, teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not—it’s real. Can you live with this? I think a lot of people are apt to think that illusory teleology sort of fades into the real thing with increasing physical complexity. I see the pull of this idea, but I think it’s mistaken, and I hope I’ve at least suggested that adherents of the view have some burden to try to defend it.
Do you believe it is possible to tell a teenager what she “ought” to do?
Now that is a whole other can of worms...
At what stage in development do normative judgements become applicable?
This is a fair and a difficult question. Roughly, another individual becomes suitable for normative appraisal when and to the extent that s/he becomes a recognizably rational agent, i.e., capable of thinking and acting for her/himself and contributing to society (again, very roughly). All kinds of interesting moral issues lurk here, but I don’t think we have to jump to any conclusions about them.
In case I’m giving the wrong impression, I don’t mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I’m not giving a theory of the nature of norms—that’s just too hard. All I’m saying for the moment is that if you stick to purely natural science, you won’t find a place for them.
When you talk about “the illusion of teleology in nature”, that’s exactly what I was getting at (or so it seems to me). That is, teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not—it’s real. Can you live with this?
The usual trick is to just call it teleonomy. Teleonomy is teleology with smart pants on.

Thanks for this—I hadn’t encountered this concept. Looks very useful.
Similar is the Dawkins distinction between designed and designoid objects.
Personally I was OK with “teleonomy” and “designed”. Biologists get pushed into this sort of thing by the literal-minded nit-pickers.
...teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not—it’s real. Can you live with this?
No, I cannot. It presumes (or is it argues?) that human rationality is not part of nature.
My apologies for using the phrase “illusion of teleology in nature”. It seems to have created confusion. Tabooing that use of the word “teleology”, what I really meant was the illusion that living things were fashioned by some rational agent for some purpose of that agent. Tabooing your use of the word, on the other hand, in your phrase “the kind of teleology needed to make sense of rationality” leads elsewhere. I would taboo and translate that use to yield something like “To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand ‘purpose’, in that sense, to understand rationality.”
Now if this is what you mean, then I agree with you. But I think I understand this kind of purpose, identifying it as the cognitive version of something like “being instrumental to survival and reproduction”. That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction. At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: “I’m horny; how about you?”. I don’t see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute.
In case I’m giving the wrong impression, I don’t mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I’m not giving a theory of the nature of norms—that’s just too hard. All I’m saying for the moment is that if you stick to purely natural science, you won’t find a place for them.
Let me try putting that in different words: “Norms are in the eye of the beholder. Natural science tries to be objective—to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter.” If that is what you are saying, I may come close to agreeing with you. But somehow, I don’t think that is what you are saying.
I would taboo and translate that use to yield something like “To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand ‘purpose’, in that sense, to understand rationality.”
Thanks, yes. This is very clear. I can buy this.
But I think I understand this kind of purpose, identifying it as the cognitive version of something like “being instrumental to survival and reproduction”. That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction.
Sorry if I’m slow to be getting it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They’re the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are ‘merely’ teleonomic (to borrow the useful term suggested by timtyler) but that human purposes must be of a different order.
Here’s one more crack at trying to motivate this, using very evidently non-scientific terms. On the one hand, I submit that you cannot make sense of a thing (human, animal, AI, whatever) as rational unless there is something that it cares about. Unless, that is, there is something which matters or is important to it (this something can be as simple as survival or reproduction). You may not like to see a respectable concept like rationality consorting with such waffly notions, but there you have it. Please object to this if you think it’s false.
On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X’s mattering to a thing, or of a thing’s caring about X, and provide me detailed evolutionary explanations of the behavioural correlates’ presence, but these correlates simply do not add up to the thing’s actually caring about X. X’s being important to a thing, X’s mattering, is more than a question of mere behaviour or computation. Again, if this seems false, please say.
If both hands seem false, I’d be interested to hear that, too.
At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: “I’m horny; how about you?”. I don’t see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute.
As soon as we start to talk about symbols and representation, I’m concerned that a whole new set of very thorny issues get introduced. I will shy away from these.
Let me try putting that in different words: “Norms are in the eye of the beholder. Natural science tries to be objective—to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter.” If that is what you are saying, I may come close to agreeing with you. But somehow, I don’t think that is what you are saying.
“It requires a different, non-reductionist … way of looking at the subject matter.” I can agree with you completely on this. (I do, however, want to resist the subjective, “observer dependent” part.)
Sorry if I’m slow to be getting it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They’re the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are ‘merely’ teleonomic (to borrow the useful term suggested by timtyler) but that human purposes must be of a different order.
Humans have brains, and can better represent future goal states. However, “purpose” in nature ultimately comes from an optimisation algorithm. That is usually differential reproductive success. Human brains run their own optimisation algorithm—but it was built by and reflects the goals of the reproducers that built it. I would be reluctant to dis bacterial purposes. They are trying to steer the future too—it is just that they are not so good at it.
You use a fair bit of normative, teleological vocabulary, here: ‘purpose’, ‘goal’, ‘success’, ‘optimisation’, ‘trying’, being ‘good’ at ‘steering’ the future. I understand your point is that these terms can all be cashed out in unproblematic, teleonomic terms, and that this is more or less an end of the matter. Nothing dubious going on here. Is it fair to say, though, that this does not really engage my point, which is that such teleonomic substitutes are insufficient to make sense of rationality?
To make sense of rationality, we need claims such as,
One ought to rank probabilities of events in accordance with the dictates of probability theory (or some more elegant statement to that effect).
If you translate this statement, substituting for ‘ought’ the details of the teleonomic ‘ersatz’ correlate, you get a very complicated statement about what one likely will do in different circumstances, and possibly about one’s ancestors’ behaviours and their relation to those ancestors’ survival chances (all with no norms).
This latter complicated statement will not mean what the first statement means, and won’t do the job required in discussing rationality of the first statement. The latter statement will be an elaborate description; what’s needed is a prescription.
Probably none of this should matter to someone doing biology, or for that matter decision theory. But if you want to go beyond and commit to a doctrine like naturalism or physical reductionism, then I submit this does become relevant.
This latter complicated statement will not mean what the first statement means, and won’t do the job required in discussing rationality of the first statement. The latter statement will be an elaborate description; what’s needed is a prescription.
Do you accept that a description of what an ideal agent does is equivalent to a prescription of what a non-ideal agent (of the same goals) should do?
This is a nice way of putting things. As long as we’re clear that what makes it a prescription is the fact that it is an ideal for the non-ideal agent.
Do you think this helps the cause of naturalism?

Yes. Well, it helps with my crusade to show that objective morality can be based on pure reason (abstract reasoning is rather apt for dealing with ideals; it is much easier to reason about a perfect circle than a wobbly, hand-drawn one).
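A toy rendering of that equivalence on the thread’s own die example (a hypothetical sketch; the agents and numbers are mine): the same function that describes the ideal bettor serves, read the other way, as the prescription for the gambler’s-fallacy bettor.

```python
import random

def ideal_credence(history):
    """Description of the ideal agent: the probability of a 6 on a fair die
    is 1/6 no matter what the past rolls were."""
    return 1 / 6

def fallacy_credence(history):
    """A non-ideal agent who thinks a 6 grows 'due' the longer it stays absent."""
    drought = 0
    for roll in reversed(history):
        if roll == 6:
            break
        drought += 1
    return min(1.0, (1 / 6) * (1 + 0.1 * drought))

def expected_overpayment(credence, true_p=1 / 6, stake=1.0):
    """Expected loss from paying `credence * stake` for a ticket worth
    `stake` if a 6 comes up (the fair value is `true_p * stake`)."""
    return (credence - true_p) * stake

history = [random.randint(1, 6) for _ in range(10)]
for agent in (ideal_credence, fallacy_credence):
    c = agent(history)
    print(agent.__name__, round(c, 3), "overpays:", round(expected_overpayment(c), 3))

# The ideal agent's overpayment is 0 by construction. The description of what
# it does ("assign 1/6") doubles as the prescription for the fallacy agent,
# who shares the goal of not losing money.
```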
On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X’s mattering to a thing, or of a thing’s caring about X, and provide me detailed evolutionary explanations of the behavioural correlates’ presence, but these correlates simply do not add up to the thing’s actually caring about X. X’s being important to a thing, X’s mattering, is more than a question of mere behaviour or computation
I have tried to suggest that bacterial purposes are ‘merely’ teleonomic -to borrow the useful term suggested by timtyler- but that human purposes must be of a different order. …
As soon as we start to talk about symbols and representation, I’m concerned that a whole new set of very thorny issues get introduced. I will shy away from these.
What is missing? A quale?

My position is that, to the extent that the notion of purpose is at all spooky, that spookiness was already present in a virus. The profound part of teleology is already there in teleonomy.
Which is not to say that humans are different from viruses only in degree. They are different in quality with regard to some other issues involved in rationality. Cognitive issues. Symbol-processing issues. Issues of intentionality. But not issues of pure purpose and telos. So why don’t you and I just shy away from this conversation? We’ve both stated our positions with sufficient clarity, I think.