...teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not—it’s real. Can you live with this?
No, I cannot. It presumes (or is it argues?) that human rationality is not part of nature.
My apologies for using the phrase “illusion of teleology in nature”. It seems to have created confusion. Tabooing that use of the word “teleology”, what I really meant was the illusion that living things were fashioned by some rational agent for some purpose of that agent. Tabooing your use of the word, on the other hand, in your phrase “the kind of teleology needed to make sense of rationality” leads elsewhere. I would taboo and translate that use to yield something like “To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand ‘purpose’, in that sense, to understand rationality.”
Now if this is what you mean, then I agree with you. But I think I understand this kind of purpose, identifying it as the cognitive version of something like “being instrumental to survival and reproduction”. That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction. At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: “I’m horny; how about you?”. I don’t see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute.
In case I’m giving the wrong impression, I don’t mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I’m not giving a theory of the nature of norms—that’s just too hard. All I’m saying for the moment is that if you stick to purely natural science, you won’t find a place for them.
Let me try putting that in different words: “Norms are in the eye of the beholder. Natural science tries to be objective—to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter.” If that is what you are saying, I may come close to agreeing with you. But somehow, I don’t think that is what you are saying.
I would taboo and translate that use to yield something like “To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand ‘purpose’, in that sense, to understand rationality.”
Thanks, yes. This is very clear. I can buy this.
But I think I understand this kind of purpose, identifying it as the cognitive version of something like “being instrumental to survival and reproduction”. That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction.
Sorry if I’m slow to get it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They’re the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are ‘merely’ teleonomic (to borrow the useful term suggested by timtyler), but that human purposes must be of a different order.
Here’s one more crack at trying to motivate this, using very evidently non-scientific terms. On the one hand, I submit that you cannot make sense of a thing (human, animal, AI, whatever) as rational unless there is something that it cares about. Unless, that is, there is something which matters or is important to it (this something can be as simple as survival or reproduction). You may not like to see a respectable concept like rationality consorting with such waffly notions, but there you have it. Please object to this if you think it’s false.
On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X’s mattering to a thing, or of a thing’s caring about X, and provide me detailed evolutionary explanations of the behavioural correlates’ presence, but these correlates simply do not add up to the thing’s actually caring about X. X’s being important to a thing, X’s mattering, is more than a question of mere behaviour or computation. Again, if this seems false, please say.
If both hands seem false, I’d be interested to hear that, too.
At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: “I’m horny; how about you?”. I don’t see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute.
As soon as we start to talk about symbols and representation, I’m concerned that a whole new set of very thorny issues get introduced. I will shy away from these.
Let me try putting that in different words: “Norms are in the eye of the beholder. Natural science tries to be objective—to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter.” If that is what you are saying, I may come close to agreeing with you. But somehow, I don’t think that is what you are saying.
“It requires a different, non-reductionist … way of looking at the subject matter.”

I can agree with you completely on this. (I do, however, want to resist the subjective, “observer dependent” part.)
Sorry if I’m slow to get it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They’re the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are ‘merely’ teleonomic (to borrow the useful term suggested by timtyler), but that human purposes must be of a different order.
Humans have brains, and can better represent future goal states. However, “purpose” in nature ultimately comes from an optimisation algorithm. That is usually differential reproductive success. Human brains run their own optimisation algorithm—but it was built by and reflects the goals of the reproducers that built it. I would be reluctant to dis bacterial purposes. They are trying to steer the future too—it is just that they are not so good at it.
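This teleonomic picture can be made concrete with a minimal sketch (the fitness function, parameters, and names below are invented for illustration, not anyone’s actual model): a toy selection loop in which no individual represents a goal anywhere, yet the population reliably comes to “steer” toward a target, purely through differential reproductive success.

```python
import random

def evolve(target, pop_size=50, generations=200, seed=0):
    """Toy differential reproduction: bitstrings matching more of
    `target` leave more offspring. No individual represents a goal;
    the appearance of purpose comes from the selection loop itself."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # The fitter half reproduces (with rare mutation); the rest do not.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        pop = [[b ^ (rng.random() < 0.01) for b in p]
               for p in parents for _ in range(2)]
    # Fraction of the target matched by the best survivor.
    return fitness(max(pop, key=fitness)) / n

match = evolve([1, 0] * 10)
```

Whether one reads the resulting trajectory as the bacteria-style “trying” described above, or as mere mechanism, is exactly the point in dispute.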
You use a fair bit of normative, teleological vocabulary here: ‘purpose’, ‘goal’, ‘success’, ‘optimisation’, ‘trying’, being ‘good’ at ‘steering’ the future. I understand your point to be that these terms can all be cashed out in unproblematic, teleonomic terms, and that this is more or less the end of the matter. Nothing dubious going on here. Is it fair to say, though, that this does not really engage my point, which is that such teleonomic substitutes are insufficient to make sense of rationality?
To make sense of rationality, we need claims such as:

One ought to rank probabilities of events in accordance with the dictates of probability theory (or some more elegant statement to that effect).
If you translate this statement, substituting for ‘ought’ the details of the teleonomic ‘ersatz’ correlate, you get a very complicated statement about what one likely will do in different circumstances, and possibly about one’s ancestors’ behaviours and their relation to those ancestors’ survival chances (all with no norms).
This latter complicated statement will not mean what the first statement means, and won’t do the job required in discussing rationality of the first statement. The latter statement will be an elaborate description; what’s needed is a prescription.
Probably none of this should matter to someone doing biology, or for that matter decision theory. But if you want to go beyond and commit to a doctrine like naturalism or physical reductionism, then I submit this does become relevant.
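The claimed gap can be put in concrete terms with a small sketch (the function name and toy credences are invented for illustration): a program can describe the fact that a set of credences violates consequences of the probability axioms, but the ‘ought’ — the claim that the agent should repair them — appears nowhere in the code.

```python
def coherence_violations(cred):
    """Purely descriptive check of some consequences of the probability
    axioms. It reports *that* credences are incoherent; nothing in it
    says the agent *ought* to fix this -- that is the prescriptive residue."""
    problems = []
    for name, p in cred.items():
        if not 0.0 <= p <= 1.0:
            problems.append(f"P({name}) = {p} is outside [0, 1]")
    if "A" in cred and "not A" in cred and abs(cred["A"] + cred["not A"] - 1.0) > 1e-9:
        problems.append("P(A) + P(not A) != 1")
    if "A" in cred and "A and B" in cred and cred["A and B"] > cred["A"] + 1e-9:
        problems.append("P(A and B) > P(A)  (conjunction fallacy)")
    return problems

# Linda-style credences: the conjunction is ranked above one of its conjuncts.
issues = coherence_violations({"A": 0.3, "A and B": 0.5})
```

The checker is the “elaborate description”; turning its output into a reason to revise one’s credences is the step the description by itself does not supply.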
This latter complicated statement will not mean what the first statement means, and won’t do the job required in discussing rationality of the first statement. The latter statement will be an elaborate description; what’s needed is a prescription.
Do you accept that a description of what an ideal agent does is equivalent to a prescription of what a non-ideal agent (of the same goals) should do?
This is a nice way of putting things. As long as we’re clear that what makes it a prescription is the fact that it is an ideal for the non-ideal agent.
Do you think this helps the cause of naturalism?

Yes. Well, it helps with my crusade to show that objective morality can be based on pure reason (abstract reasoning is rather apt for dealing with ideals; it is much easier to reason about a perfect circle than a wobbly, hand-drawn one).
On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X’s mattering to a thing, or of a thing’s caring about X, and provide me detailed evolutionary explanations of the behavioural correlates’ presence, but these correlates simply do not add up to the thing’s actually caring about X. X’s being important to a thing, X’s mattering, is more than a question of mere behaviour or computation.

What is missing? A quale?
I have tried to suggest that bacterial purposes are ‘merely’ teleonomic (to borrow the useful term suggested by timtyler), but that human purposes must be of a different order. …
As soon as we start to talk about symbols and representation, I’m concerned that a whole new set of very thorny issues get introduced. I will shy away from these.
My position is that, to the extent that the notion of purpose is at all spooky, that spookiness was already present in a virus. The profound part of teleology is already there in teleonomy.

Which is not to say that humans are different from viruses only in degree. They are different in kind with regard to some other issues involved in rationality: cognitive issues, symbol-processing issues, issues of intentionality. But not issues of pure purpose and telos. So why don’t you and I just shy away from this conversation? We’ve both stated our positions with sufficient clarity, I think.