From an epistemic position, the proposition P1: “Dave’s mind is capable of thinking the thought that A1 and A2 shared” is experimentally unfalsifiable.
I think that overestimates my claim: suppose Dave were a propositional logic machine, and the A’s were first-order logic machines. If we were observing Dave and the Aliens, and given that we are capable of thinking more expressively than either of them, then we could have reason for thinking that Dave cannot think the thoughts that the Aliens are thinking (let’s just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
That, again, is not my point. My point is that Dave could never have reasons for thinking that he couldn’t think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do. If B is true, A is not something Dave can have reasons for. If Dave can have reason for thinking A, then B is false.
So suppose Dave has understood that the aliens are thinking. By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.
If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action, then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.
And to whatever extent we third party observers can see that Dave cannot understand them, just to that extent Dave cannot have reasons for thinking that the aliens are rational. In such a case, Dave may believe that the aliens are thinking and it might be impossible for him to understand them. But in this case Dave’s opinion that the aliens are thinking is irrational, even if it is true.
Thus, no one can ever be given any reason (i.e. there can never be any evidence) for thinking that there are thoughts that they cannot think. We can never know that there are no such thoughts either, I suppose.
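For concreteness, the expressiveness gap assumed in the machine example above can be made explicit with a stock instance; the particular sentence is mine, chosen only for illustration. A first-order machine can represent

$\forall x\, \exists y\, (y > x)$

i.e. “for every number there is a strictly greater one.” A propositional machine has only sentence letters and connectives, with no quantifiers or individual variables, so no propositional formula expresses that universal claim over an infinite domain; at best such a machine can entertain each instance separately. A third-party observer who knew the resources of both machines could therefore point to a specific thought available to the one and not to the other, which is what would let us outside observers prove P1 false in this hypothetical.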
suppose Dave were a propositional logic machine, and the A’s were first-order logic machines. [..] (let’s just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
Supposing both that all of those suppositions were true, and that we could somehow determine experimentally that they were true, then, yes, it would follow that the conclusion was provable.
I’m not sure how we would determine experimentally that they were true, though. I wouldn’t normally care, but you made such a point a moment ago about the importance of your claim being about what’s knowable rather than about what’s true that I’m not sure how to take your current willingness to bounce back and forth between that claim about what can be known in practice, and these arguments that depend on unknowable-in-practice presumptions.
That, again, is not my point.
Then I suppose we can safely ignore it for now.
Dave could never have reasons for thinking that he couldn’t think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do.
As I’ve already said, in this example I have reason to believe A1 and A2 are doing some thinking, and if I make a variety of good-faith-but-unsuccessful attempts to recapitulate that thinking I have reason to believe I’m incapable of doing so.
So suppose Dave has understood that the aliens are thinking.
Is it sufficient to suppose that Dave has reasons to believe the aliens are thinking?
By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.
I’m willing to posit all of those things, and I can imagine how they might follow from a belief that the aliens are thinking, for sufficiently convenient values of “world”, “largely”, and “relevant”. Before I lean too heavily on any of that I’d want to clarify those words further, but I’m not sure it actually matters.
If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action
I don’t agree with this. Just to pick a trivial example, if you write down a belief B on a slip of paper and hand it to my friend Sam, who I trust to be both a good judge of and an honest reporter of truth, and Sam says to me “B is true,” I have reason to think B is true but I don’t know the content of B.
then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.
The premise is false, but I agree that were it true your conclusion would follow.
I have reason to think B is true but I don’t know the content of B.
This seems to be a crucial disagreement, so we should settle it first. In your example, you said that you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs, and that you think Sam makes judgements roughly in the same ways you do.
So, you mostly understand the kinds of inferences Sam draws, and you mostly understand the beliefs that Sam has. If you infer from this that B is true because Sam says that it is, you must be assuming that B isn’t so odd a belief that Sam has no competence in assessing it. It must be something Sam is familiar enough with to be comfortable assessing. All that said, you’ve got a lot of beliefs about what B is, without knowing the specifics.
Essentially, your inference that B is true because Sam says that it is, is the belief that though you don’t know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.
In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs). Thinking that B is probably true just is believing you know something about B.
(ETA: I want to add how closely this example resembles your aliens example, both in the setup, and in how (I think) it should be answered. In both cases, we can look at the example more closely and discover that in drawing the conclusion that the aliens are thinking or that B is true, a great deal is assumed. I’m saying that you can either have these assumptions, but then my translation point follows, or you can deny the translation point, but then you can’t have the assumptions necessary to set up your examples.)
This seems to be a crucial disagreement, so we should settle it first.
All right.
you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs
Sure, if Sam and I freely interact and I consider him a good judge and honest reporter of truth, I will over time come to believe many of the things Sam believes.
Also, to the extent that I also consider myself a good judge of truth (which has to be nontrivial for me to trust my judgment of Sam in the first place), many of the beliefs I come to on observing the world will also be beliefs Sam comes to on observing the world, even if we don’t interact freely enough for him to convince me of his belief. This is a little trickier, because not all reasons for belief are fungible… I might have reasons for believing myself a good judge of whether Sam is a good judge of truth without having reasons for believing myself a good judge of truth more generally. But I’m willing to go along with it for now.
Agreed so far.
you think Sam makes judgements roughly in the same ways you do.
No, I don’t follow this at all. I might think Sam comes to the same conclusions that I would given the same data, but it does not follow in the least that he uses the same process to get there. That said, I’m not sure this matters to your argument.
So, you mostly understand the kinds of inferences Sam draws
Yes, both in the sense that I can mostly predict the inferences Sam will draw from given data, and in the sense that any arbitrarily-selected inference that Sam draws is very likely to be one that I can draw myself.
you mostly understand the beliefs that Sam has
Yes, in the same ways.
If you infer from this that B is true because Sam says that it is, you must be assuming that B isn’t so odd a belief that Sam has no competence in assessing it.
Something like this, yes. It is implicit in this example that I trust Sam to recognize whether B is outside his competence to evaluate, and to report that fact if it is; since he hasn’t reported it, I’m confident that it isn’t.
you’ve got a lot of beliefs about what B is, without knowing the specifics.
Certainly. In addition to all of that stuff, I also have the belief that B can be written down on a slip of paper, with all that that implies.
Essentially, your inference that B is true because Sam says that it is, is the belief that though you don’t know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.
Statistically speaking, yes: given an arbitrarily selected B1 for which Sam would report “B1 is true,” the prior probability that I already know B1 is high.
But this is of course in no sense guaranteed. For example, B might be “I’m wearing purple socks,” in response to which Sam checks the color of your socks, and subsequently reports to me that B is true. In this case I don’t in fact know what color socks you are wearing.
In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs).
Again, statistically speaking, sure.
Thinking that B is probably true just is believing you know something about B.
No. You are jumping from “X is reliable evidence of Y” to “X just is Y” without justification.
If X smells good, I have reason to believe that X tastes good, because most things that smell good also taste good. But it is quite possible for me to both smell and taste X and conclude “X smells good and tastes bad.” If “thinking that X smells good just is believing that X tastes good” were true, I would at that point also believe “X tastes good and tastes bad,” which is not in fact what happens. Therefore I conclude that “thinking that X smells good just is believing that X tastes good” is false.
Similarly, if Sam reports B as true, I have good reason to think B is probably true, and I also have good reason to think I know something important about the content of B (e.g., that it is or follows from one of my own beliefs), because most things that Sam would report as true I also know something important about the contents of (e.g., ibid). But it’s quite possible for Sam to report B as true without me knowing anything important about the content of B. I similarly conclude that “thinking that B is probably true just is believing [I] know something [important] about B” is false.
In case it matters, not only is it possible for me to believe B is true when I don’t in fact know the content of B (e.g., B is “Abrooks’ socks are purple” and Sam checks your socks and tells me “B is true” when I neither know what B says nor know that Abrooks’ socks are purple), it’s also possible for me to have good reason to believe that I don’t know the content of B in this situation (e.g., if Sam further tells me “Dave, you don’t know the content of B”… which in fact I don’t, and Sam has good reason to believe I don’t.)
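The shape of the smell/taste objection can be put schematically; the notation below is shorthand added for clarity, not anything from the exchange itself. Write $\mathrm{Bel}(p)$ for the state of believing $p$, $S$ for “X smells good,” and $T$ for “X tastes good.” The two claims being pried apart are

$P(T \mid S) > P(T)$ (smelling good is reliable evidence of tasting good)

$\mathrm{Bel}(S) = \mathrm{Bel}(T)$ (believing the one just is believing the other)

After tasting, I can be in the joint state $\mathrm{Bel}(S) \wedge \mathrm{Bel}(\neg T)$. If the identity claim held, that state would also have to be $\mathrm{Bel}(T) \wedge \mathrm{Bel}(\neg T)$, which is not the state anyone is in when judging that something smells good but tastes bad. So the evidential claim can be true while the identity claim is false, and the same schema carries over to Sam’s report and the content of B.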
No. You are jumping from “X is reliable evidence of Y” to “X just is Y” without justification.
You know that B is likely to be one of your beliefs, or something that follows straightforwardly from your beliefs. It makes no difference if B actually turns out not to be one of your beliefs or something that follows straightforwardly therefrom. Likewise, you would have good reason to guess that the outcome of a die roll is 1-5 as opposed to 6. If it turns out that it comes up 6, this does not impugn the probability involved in your initial estimate. Knowing how dice work is knowing something about this die roll and its outcome. By knowing how dice work, you know that the outcome of this roll is probably 1-5, even if it happens to be 6. Knowing how Sam’s judgements work is knowing something about this judgement.
None of this, I grant you, involves knowing the specific content of B. But all of this is knowledge about the content of B. If Sam said to you “Dave, you don’t know the content of B”, you ought to reply “Sam, I know enough about your beliefs and judgements that I really do know something about the content of B, namely that it’s something you would judge to be true on the basis of a shared set of beliefs.”
Your setup, I think, draws an arbitrary distinction between knowledge of the specific content of B and knowledge of B as a member of someone’s set of beliefs. Even if there is a distinction here (i.e. if we’re foundationalists of some kind), it still doesn’t follow that knowledge of the second kind is wholly unrelated to knowledge of the first. In fact, that would be astonishing.
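To put rough numbers on the die-roll analogy above (a fair six-sided die is assumed purely for illustration):

$P(\text{outcome} \in \{1, \dots, 5\}) = \tfrac{5}{6} \approx 0.83$

That estimate is licensed by knowing how dice work, before the roll lands, and an eventual 6 does not show it was unreasonable given what was known; it shows only that the improbable case obtained. This is the sense in which one knows something about this roll’s outcome without knowing the outcome, and, by analogy, something about B without knowing B’s specific content.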
Therefore I conclude that “thinking that X smells good just is believing that X tastes good” is false.
So, I’m not saying that because you have reason to believe B to be true, you therefore have reason to believe that you know the content of B. What I’m saying is that because you have reason to believe B to be true, you therefore do know something about the content of B.
I hope we can agree that in common usage, it’s unproblematic for me to say that I don’t know what color your socks are. I don’t, in fact, know what color your socks are. I don’t even know that you’re wearing socks.
But, sure, I think it’s more probable that your socks (if you’re wearing them) are white than that they’re purple, and that they probably aren’t transparent, and that they probably aren’t pink. I agree that I know something about the color of your socks, despite not knowing the color of your socks.
And, sure, if you’re thinking “my socks are purple” and I’m thinking “Abrooks’ socks probably aren’t transparent,” these kinds of knowledge aren’t wholly unrelated to one another. But that doesn’t mean that either my brain or my command of the English language is capable of traversing the relationship from one to the other.
Much as you think I’m drawing arbitrary distinctions, I think you’re eliding over real distinctions.
Okay, so it sounds like we’re agreed that your reasons for believing B are at the same time things you take yourself to know about the content of B. Would you accept that this is always going to be true? Or can you think of a counterexample?
If this is always true, then we should at least take this in support of my more general claim that you cannot have reason to think that something is rational or true, i.e. that something is thinking, without taking yourself to know something about the content of that thought.
If we’re on the same page so far, then we’ve agreed that you can’t recognise something as thought without assuming you can understand something about its content. Now the question remains: can you understand something to be a thought or part of a thought while at the same time having reason to think it is fundamentally unintelligible to you? Or does the very recognition of something as a thought immediately give you reason to think you can understand it, while evidence against your understanding justifies you only in concluding that something isn’t thought after all?
Yes, my reasons for believing B are, in the very limited sense we’re now talking about, things I know about the content of B (e.g., that the value of a die roll is probably between 1 and 5).
Yes, agreed that if I think something is thinking, I know something about the content of its thought.
Further agreed that in the highly extended sense that you’re using “understanding”—the same sense that I can be said to “know” what color socks you’re wearing—I understand everything that can be understood by every thinking system, and my inability to understand a thing is evidence against its being a thought.
So, OK… you’ve proven your point.
I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.
I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.
Oh, come on, this has been a very interesting discussion. And I don’t take myself to have proven any sort of point. Basically, if we’ve agreed to all of the above, then we still have to address the original point about precision.
Now, I don’t have a very good argument here for thinking that you can go from knowing some limited and contextual things about the content of a thought to knowing the content with as much precision as the thinker. But here goes: suppose you have a cooperative and patient alien, and that you yourself are intent on getting the translation right. Also, let’s assume you have a lot of time, and all the resources you could want for pursuing the translation. So given unlimited time, and full use of metaphor, hand gestures, extended and complex explanations in whatever terms you do manage to get out of the context, corrections of mistakes, and so on, I think you could cover any gap so long as you can take the first step. And so long as the thought isn’t actually logically alien.
This means that the failure to translate something should be taken not as evidence that it might be impossible, but as evidence that it is in fact possible to translate. After all, if you know enough to have reason to believe that you’ve failed, you have taken the first few steps already.
As to whether or not logically alien thought, thought which involves inferences of which we are incapable, is possible, I don’t know. I think that if we encountered such thought, we would pretty much only have reason to think that it’s not thought.
So, forget about proving anything. Have I made this plausible? Does it now seem reasonable to you to be surprised (contra the original quote) to hear that there are thoughts we cannot think? If I’ve utterly failed to convince you, after all, I would take that as evidence against my point.
My position on this hasn’t changed, really.

I would summarize your argument as “If we can recognize them as thinking, we are necessarily mutually intelligible in some highly constrained fashion, which makes it likely that we are mutually intelligible in the more general case. Conversely, if we aren’t mutually intelligible in the general case, we can’t recognize them as thinking.”
My objection has been and remains with the embedded assumption that if two systems are mutually intelligible in some highly constrained fashion, it’s likely that they are mutually intelligible in the more general case. On average this might well be true, but the exceptions are important. (Similar things are true when playing Russian roulette. On average it’s perfectly safe, but I wouldn’t recommend playing.)
My reason for objecting remains what it was: evolved systems are constrained by the environment in which they evolved, and are satisficers rather than optimizers, and are therefore highly unlikely to be general-purpose systems. This is as true of cognitive systems as it is of digestive systems. I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
That said, I would certainly agree that when faced with a system I have reason to believe is thinking, the best strategy for me to adopt is to assume that I can understand its thoughts given enough time and effort, and to make that effort. (Similarly, when faced with a system I have reason to believe needs food, I should assume that I can feed it given enough time and effort, and make that effort.)
But when faced with a system that I have reason to believe is thinking, and where all plausible efforts to understand it have failed, I am not justified in concluding that it isn’t thinking after all rather than concluding that its thinking is simply alien to me.
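To make the Russian-roulette aside concrete (a six-chamber revolver with a single round, and the payoffs are purely hypothetical numbers): each pull is survived with probability $5/6 \approx 0.83$, so the typical outcome is fine, but with a catastrophic loss $L$ and a trivial gain $g$ the expected value

$\tfrac{5}{6}\, g - \tfrac{1}{6}\, L$

is negative whenever $L > 5g$. The parallel being drawn is that even if constrained mutual intelligibility usually goes along with general mutual intelligibility, the rare exception is exactly the case under dispute, so the average case by itself settles little.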
I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
I guess my problem with this claim is similar to my problem with the original quote: the analogy between sensations and thoughts is pretty weak, such that the inference from incompatible sensations to incompatible thoughts is dubious. The analogy between thoughts and digestion is even weaker. The objection that we’re organisms of a certain kind, with certain biological limits, is one which involves taking an extremely general point, and supposing that it bears on this issue in particular. But how? Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support. The connection between neural activity and brain structures on the one hand and thoughts on the other is not so clear that we can just jump from such general observations about the one to specific claims about the other.

So how can we fill out this reasoning?
Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support.
Yes, it does seem like an obvious connection to me. But, all right...
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
And as I said, I consider the common reference class of evolved systems a source of useful information here as well.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role. (Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually nonintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
I don’t think this is a good inference: it doesn’t follow from the fact that defective brains are constrained in some of their cognitive capacities that for healthy brains there are thoughts that they cannot think (and not for reasons of memory storage, etc.). First, this involves an inference from facts about an unhealthy brain to facts about a healthy brain. Second, this involves an inference from certain kinds of limitations on unhealthy brains to other kinds of limitations on healthy brains. After all, we’ve agreed that we’re not talking about limits on thinking caused by a lack of resources like memory. None of the empirical work showing that brain damage causes cognitive limits is strictly relevant to the question of whether or not other languages are translatable into our own.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role.
This is still my position.
(Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually nonintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
No, I don’t consider that to be possible, though it’s a matter of how broadly we construe ‘thinking’ and ‘language’. But where thinking is the sort of thing that’s involved in truth values and inference relations (the truth predicate is probably not actually necessary), and where language is what we are using to communicate right now, then I would say “there is nothing that thinks that cannot use language, and everything that can use language can to that extent think.”
As I said the last time this came up, I don’t consider the line you want to draw on “for reasons of memory storage, etc” to be both well-defined and justified.
More precisely, I freely grant that if there are two minds A and B such that A can think thought T and B cannot, then there is some physical difference D between A and B that causes that functional difference; whether D is in the category of “memory storage, etc.” is not well defined. If any physical difference counts, then I guess I agree with you: if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
there is nothing that thinks that cannot use language, and everything that can use language can to that extent think
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
Well, I take it for granted that you and I can think the same thought (say “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains. So the fact that there are physical differences between two thinkers doesn’t immediately mean that they cannot think the same thoughts. I expect you could think all the same thoughts that I think if we were to make a project of it. And yet it is implausible (and as far as I know empirically unsupported) to think that part or all (or even any) of your brain would as a result become structurally identical to mine.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
So we can’t infer from physical differences to cognitive incompatibilities. I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch? Have I misrepresented your view?
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
Yes, I think so, though of course there wasn’t a ‘first thinker/language user’.
Well, I take it for granted that you and I can think the same thought (say “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains.
This is another place where I want to avoid treating “Y is near enough to X for practical considerations” as equivalent to “Y is X” and then generalizing out from that to areas outside those practical considerations.
I would certainly agree that you and I can think two thoughts Ta and Tb and have them be similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to “It is sunny in Chicago” might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity to be exactly identical.
So physical differences can matter, but among healthy brains, they almost always don’t.
Sure, but why are you limiting the domain of discourse in this way?
If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases.
I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don’t see why ignoring him is justified.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
I would say rather that the relevant parts of two English speakers’ brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility.
As above, this is equivalent to what you said for practical considerations.
I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch?
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
The point isn’t that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended.
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn’t find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?
I don’t know if you’re missing anything.

I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don’t expect repeating myself to change that. If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
I consider it evidence, just weak and indirect in relation to (what I take to be) much stronger and more directly related evidence that we can assume that anything we could recognize as thinking is something we can think. Such that, on balance, I would be surprised to hear that there are such thoughts.
It sounds like we’ve pretty much exhausted ourselves here, so thanks for the discussion.