No. You are jumping from “X is reliable evidence of Y” to “X just is Y” without justification.
You know that B is likely to be one of your beliefs, or something that follows straightforwardly from your beliefs. It makes no difference if B actually turns out not to be one of your beliefs or something that follows straightforwardly therefrom. Likewise, you would have good reason to guess that the outcome of a die roll is 1-5 as opposed to 6. If it turns out that it comes up 6, this does not impugn the probability involved in your initial estimate. Knowing how dice work is knowing something about this die roll and its outcome. By knowing how dice work, you know that the outcome of this roll is probably 1-5, even if it happens to be 6. Knowing how Sam’s judgements work is knowing something about this judgement.
None of this, I grant you, involves knowing the specific content of B. But all of this is knowledge about the content of B. If Sam said to you “Dave, you don’t know the content of B”, you ought to reply “Sam, I know enough about your beliefs and judgements that I really do know something about the content of B, namely that it’s something you would judge to be true on the basis of a shared set of beliefs.”
Your setup, I think, draws an arbitrary distinction between knowledge of the specific content of B and knowledge of B as a member of someone’s set of beliefs. Even if there is a distinction here (i.e. if we’re foundationalists of some kind), it still doesn’t follow that knowledge of the second kind is wholly unrelated to knowledge of the first. In fact, that would be astonishing.
So, I’m not saying that because you have reason to believe B to be true, you therefore have reason to believe that you know the content of B. What I’m saying is that because you have reason to believe B to be true, you therefore do know something about the content of B.
I hope we can agree that in common usage, it’s unproblematic for me to say that I don’t know what color your socks are. I don’t, in fact, know what color your socks are. I don’t even know that you’re wearing socks.
But, sure, I think it’s more probable that your socks (if you’re wearing them) are white than that they’re purple, and that they probably aren’t transparent, and that they probably aren’t pink. I agree that I know something about the color of your socks, despite not knowing the color of your socks.
And, sure, if you’re thinking “my socks are purple” and I’m thinking “Abrooks’ socks probably aren’t transparent,” these kinds of knowledge aren’t wholly unrelated to one another. But that doesn’t mean that either my brain or my command of the English language is capable of traversing the relationship from one to the other.
Much as you think I’m drawing arbitrary distinctions, I think you’re eliding over real distinctions.
Okay, so it sounds like we’re agreed that your reasons for believing B are at the same time things you take yourself to know about the content of B. Would you accept that this is always going to be true? Or can you think of a counterexample?
If this is always true, then we should at least take it as support for my more general claim that you cannot have reason to think that something is rational or true, i.e. that something is thinking, without taking yourself to know something about the content of that thought.
If we’re on the same page so far, then we’ve agreed that you can’t recognise something as thought without assuming you can understand something about its content. Now the question remains: can you understand something to be a thought or part of a thought while at the same time having reason to think it is fundamentally unintelligible to you? Or does the very recognition of something as a thought immediately give you reason to think you can understand it, while evidence against your understanding justifies you only in concluding that something isn’t thought after all?
Yes, my reasons for believing B are, in the very limited sense we’re now talking about, things I know about the content of B (e.g., that the value of a die roll is probably between 1 and 5).
Yes, agreed that if I think something is thinking, I know something about the content of its thought.
Further agreed that in the highly extended sense that you’re using “understanding”—the same sense in which I can be said to “know” what color socks you’re wearing—I understand everything that can be understood by every thinking system, and my inability to understand a thing is evidence against its being a thought.
So, OK… you’ve proven your point.
I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.
Oh, come on, this has been a very interesting discussion. And I don’t take myself to have proven any sort of point. Basically, if we’ve agreed to all of the above, then we still have to address the original point about precision.
Now, I don’t have a very good argument for thinking that you can go from knowing some limited and contextual things about the content of a thought to knowing that content with as much precision as the thinker. But here goes: suppose you have a cooperative and patient alien, and that you yourself are intent on getting the translation right. Let’s also assume you have unlimited time and all the resources you could want for pursuing the translation. Given that time, and full use of metaphor, hand gestures, extended and complex explanations in whatever terms you do manage to establish from the context, corrections of mistakes, and so on, I think you could cover any gap so long as you can take the first step, and so long as the thought isn’t actually logically alien.
This means that a recognized failure to translate something should be taken not as evidence that translation might be impossible, but as evidence that it is in fact possible. After all, if you know enough to have reason to believe that you’ve failed, you have already taken the first few steps.
As to whether or not logically alien thought, thought which involves inferences of which we are incapable, is possible, I don’t know. I think that if we encountered such thought, we would pretty much only have reason to think that it’s not thought.
So, forget about proving anything. Have I made this plausible? Does it now seem reasonable to you to be surprised (contra the original quote) to hear that there are thoughts we cannot think? If I’ve utterly failed to convince you, after all, I would take that as evidence against my point.
My position on this hasn’t changed, really.

I would summarize your argument as “If we can recognize them as thinking, we are necessarily mutually intelligible in some highly constrained fashion, which makes it likely that we are mutually intelligible in the more general case. Conversely, if we aren’t mutually intelligible in the general case, we can’t recognize them as thinking.”
My objection has been and remains with the embedded assumption that if two systems are mutually intelligible in some highly constrained fashion, it’s likely that they are mutually intelligible in the more general case. On average this might well be true, but the exceptions are important. (Similar things are true when playing Russian roulette. On average it’s perfectly safe, but I wouldn’t recommend playing.)
My reason for objecting remains what it was: evolved systems are constrained by the environment in which they evolved, and are satisficers rather than optimizers, and are therefore highly unlikely to be general-purpose systems. This is as true of cognitive systems as it is of digestive systems. I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
That said, I would certainly agree that when faced with a system I have reason to believe is thinking, the best strategy for me to adopt is to assume that I can understand its thoughts given enough time and effort, and to make that effort. (Similarly, when faced with a system I have reason to believe needs food, I should assume that I can feed it given enough time and effort, and make that effort.)
But when faced with a system that I have reason to believe is thinking and where all plausible efforts have failed, I am not justified in concluding that it isn’t thinking after all, rather than concluding that its thinking is simply alien to me.
I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
I guess my problem with this claim is similar to my problem with the original quote: the analogy between sensations and thoughts is pretty weak, such that the inference from incompatible sensations to incompatible thoughts is dubious. The analogy between thoughts and digestion is even weaker. The objection that we’re organisms of a certain kind, with certain biological limits, is one which involves taking an extremely general point, and supposing that it bears on this issue in particular. But how? Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support. The connection between neural activity and brain structures on the one hand and thoughts on the other is not so clear that we can just jump from such general observations about the one to specific claims about the other.

So how can we fill out this reasoning?
Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support.
Yes, it does seem like an obvious connection to me. But, all right...
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
And as I said, I consider the common reference class of evolved systems a source of useful information here as well.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role. (Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually nonintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
I don’t think this is a good inference: it doesn’t follow from the fact that defective brains are constrained in some of their cognitive capacities that for healthy brains there are thoughts that they cannot think (and not for reasons of memory storage, etc.). First, this involves an inference from facts about an unhealthy brain to facts about a healthy brain. Second, this involves an inference from certain kinds of limitations on unhealthy brains to other kinds of limitations on healthy brains. After all, we’ve agreed that we’re not talking about limits on thinking caused by a lack of resources like memory. None of the empirical work showing that brain damage causes cognitive limits is strictly relevant to the question of whether or not other languages are translatable into our own.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role.
This is still my position.
(Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually nonintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
No, I don’t consider that to be possible, though it’s a matter of how broadly we construe ‘thinking’ and ‘language’. But where thinking is the sort of thing that’s involved in truth values and inference relations (the truth predicate is probably not actually necessary), and where language is what we are using to communicate right now, then I would say “there is nothing that thinks that cannot use language, and everything that can use language can to that extent think.”
As I said the last time this came up, I don’t consider the line you want to draw at “for reasons of memory storage, etc.” to be both well-defined and justified.
More precisely, I freely grant that if there are two minds A and B such that A can think thought T and B cannot think T, then there is some physical difference D between A and B that causes that functional difference; whether D is in the category of “memory storage, etc.” is not well defined. If any physical difference counts, then I guess I agree with you: if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
there is nothing that thinks that cannot use language, and everything that can use language can to that extent think
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
Well, I take it for granted that you and I can think the same thought (say “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains. So the fact that there are physical differences between two thinkers doesn’t immediately mean that they cannot think the same thoughts. I expect you could think all the same thoughts that I think if we were to make a project of it. And yet it is implausible (and as far as I know empirically unsupported) to think that part or all (or even any) of your brain would as a result become structurally identical to mine.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
So we can’t infer from physical differences to cognitive incompatibilities. I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch? Have I misrepresented your view?
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
Yes, I think so, though of course there wasn’t a ‘first thinker/language user’.
Well, I take it for granted that you and I can think the same thought (say “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains.
This is another place where I want to avoid treating “Y is near enough to X for practical considerations” as equivalent to “Y is X” and then generalizing out from that to areas outside those practical considerations.
I would certainly agree that you and I can think two thoughts Ta and Tb and have them be similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to “It is sunny in Chicago” might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity to be exactly identical.
So physical differences can matter, but among healthy brains, they almost always don’t.
Sure, but why are you limiting the domain of discourse in this way?
If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases.
I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don’t see why ignoring him is justified.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
I would say rather that the relevant parts of two English speakers’ brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility.
As above, this is equivalent to what you said for practical considerations.
I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch?
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
The point isn’t that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended.
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn’t find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?
I don’t know if you’re missing anything.

I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don’t expect repeating myself to change that. If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
I consider it evidence, just weak and indirect in relation to (what I take to be) much stronger and more direct evidence for assuming that anything we could recognize as thinking thinks thoughts we can think. So that, on balance, I would be surprised to hear that there are thoughts we cannot think.
It sounds like we’ve pretty much exhausted ourselves here, so thanks for the discussion.