I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
I guess my problem with this claim is similar to my problem with the original quote: the analogy between sensations and thoughts is weak enough that the inference from incompatible sensations to incompatible thoughts is dubious, and the analogy between thoughts and digestion is weaker still. The objection that we’re organisms of a certain kind, with certain biological limits, takes an extremely general point and supposes that it bears on this issue in particular. But how? Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support. The connection between neural activity and brain structures on the one hand and thoughts on the other is not so clear that we can simply jump from such general observations about the one to specific claims about the other. So how can we fill out this reasoning?
Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support.
Yes, it does seem like an obvious connection to me. But, all right...
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
And as I said, I consider the common reference class of evolved systems a source of useful information here as well.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role. (Even more incidentally, does it follow from this that you find it more plausible that two nonlinguistic thinking systems could be mutually unintelligible? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
I don’t think this is a good inference: it doesn’t follow from the fact that defective brains are constrained in some of their cognitive capacities that there are thoughts healthy brains cannot think (and not for reasons of memory storage, etc.). First, it moves from facts about unhealthy brains to facts about healthy brains. Second, it moves from certain kinds of limitations on unhealthy brains to other kinds of limitations on healthy brains. After all, we’ve agreed that we’re not talking about limits on thinking caused by a lack of resources like memory. None of the empirical work showing that brain damage causes cognitive limits is strictly relevant to the question of whether or not other languages are translatable into our own.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role.
This is still my position.
(Even more incidentally, does it follow from this that you find it more plausible that two nonlinguistic thinking systems could be mutually unintelligible? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
No, I don’t consider that to be possible, though it’s a matter of how broadly we construe ‘thinking’ and ‘language’. But where thinking is the sort of thing that’s involved in truth values and inference relations (the truth predicate is probably not actually necessary), and where language is what we are using to communicate right now, then I would say “there is nothing that thinks that cannot use language, and everything that can use language can to that extent think.”
As I said the last time this came up, I don’t consider the line you want to draw on “for reasons of memory storage, etc” to be both well-defined and justified.
More precisely, I freely grant that if there are two minds A and B such that A can think thought T and B cannot think T, then there is some physical difference D between A and B that causes that functional difference; what is not well defined is whether D falls in the category of “memory storage, etc.” If any physical difference counts, then I guess I agree with you: if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
there is nothing that thinks that cannot use language, and everything that can use language can to that extent think
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
Well, I take it for granted that you and I can think the same thought (say “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains. So the fact that there are physical differences between two thinkers doesn’t immediately mean that they cannot think the same thoughts. I expect you could think all the same thoughts that I think if we were to make a project of it. And yet it is implausible (and as far as I know empirically unsupported) to think that part or all (or even any) of your brain would as a result become structurally identical to mine.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
So we can’t infer from physical differences to cognitive incompatibilities. I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch? Have I misrepresented your view?
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
Yes, I think so, though of course there wasn’t a ‘first thinker/language user’.
Well, I take it for granted that you and I can think the same thought (say “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains.
This is another place where I want to avoid treating “Y is near enough to X for practical considerations” as equivalent to “Y is X” and then generalizing out from that to areas outside those practical considerations.
I would certainly agree that you and I can think two thoughts Ta and Tb that are similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to “It is sunny in Chicago” might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity to be exactly identical.
So physical differences can matter, but among healthy brains, they almost always don’t.
Sure, but why are you limiting the domain of discourse in this way?
If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases.
I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don’t see why ignoring him is justified.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
I would say rather that the relevant parts of two English speakers’ brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility.
As above, this is equivalent to what you said for practical considerations.
I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch?
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
The point isn’t that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended.
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn’t find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?
I don’t know if you’re missing anything.

I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don’t expect repeating myself to change that. If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
I consider it evidence, just weak and indirect compared to (what I take to be) much stronger and more direct evidence that anything we could recognize as thinking is something we can think. So, on balance, I would be surprised to hear that there are thoughts we cannot think.
It sounds like we’ve pretty much exhausted ourselves here, so thanks for the discussion.