Well, I take it for granted that you and I can think the same thought (say, “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains.
This is another place where I want to avoid treating “Y is near enough to X for practical considerations” as equivalent to “Y is X” and then generalizing out from that to areas outside those practical considerations.
I would certainly agree that you and I can think two thoughts Ta and Tb and have them be similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to “It is sunny in Chicago” might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, that either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity will be exactly identical.
So physical differences can matter, but among healthy brains, they almost always don’t.
Sure, but why are you limiting the domain of discourse in this way?
If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases.
I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don’t see why ignoring him is justified.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
I would say rather that the relevant parts of two English speakers’ brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility.
As above, this is equivalent to what you said, but only for practical purposes.
I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch?
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
The point isn’t that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended.
Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn’t find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?
I don’t know if you’re missing anything. I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don’t expect repeating myself to change that. If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
I consider it evidence, just weak and indirect compared to (what I take to be) much stronger and more directly relevant evidence for the assumption that anything we could recognize as thinking is something we can think. So, on balance, I would be surprised to learn that there are such thoughts.
It sounds like we’ve pretty much exhausted ourselves here, so thanks for the discussion.