The existence of other signals your brain simply doesn’t process doesn’t shift your prior at all?
That doesn’t seem strictly relevant. Other signals might lead me to believe that there are thoughts I don’t think (but I accepted that already), not thoughts I can’t think. How could I recognize such a thing as a thought? After all, while every thought is a brain signal, not every brain signal is a thought: animals have lots of brain signals, but no thoughts.
Well, for example I don’t think very much about soccer. There are thoughts about who the best soccer team is that I simply don’t ever think. But I can think them.
Another case: In two different senses of ‘can’, I can and can’t understand Spanish. I can’t understand it at the moment, but nevertheless Spanish sentences are in principle translatable into sentences I can understand. I also can’t read Aztec hieroglyphs, and here the problem is more serious: no one knows how to read them. But nevertheless, insofar as we assume they are a form of language, we assume that we could translate them given the proper resources. To see something as translatable just is to see it as a language, and to see something as a language is to see it as translatable. Anything which is in principle untranslatable just isn’t recognizable as a language.
I think the point is analogous (and that’s no accident) with thoughts. Any thought that I couldn’t think by any means is something I cannot by any means recognize as a thought in the first place. All this is just a way of saying that the belief that there are thoughts you cannot think is one of those beliefs that could never modify your anticipations. That should be enough to discount it as a serious consideration.
And yet, if I see two nonhuman life forms A1 and A2, both of which are performing something I classify as the same task but doing it differently, and A1 and A2 interact, after which they perform the task the same way, I would likely infer that thoughts had been exchanged between them, but I wouldn’t be confident that the thoughts which had been exchanged were thoughts that could be translated to a form that I could understand.
I would likely infer that thoughts had been exchanged between them, but I wouldn’t be confident that the thoughts which had been exchanged were thoughts that could be translated to a form that I could understand.
Alternative explanations include:
1. They exchanged genetic material, like bacteria, or outright code, like computer programs, which made them behave more similarly.
2. They are programs; one attacked the other, killed it, and replaced its computational slot with a copy of itself.
3. A1 gave A2 a copy of its black-box decision maker, which both now use to determine their behavior in this situation. However, neither of them understands the black box’s decision algorithm on the level of their own conscious thoughts; and the black box itself is not sentient or alive and has no thoughts.
4. One of them observed that the other was more efficient and is now emulating its behavior, but they didn’t talk about it (“exchange thoughts”), just looked at one another.
These are, of course, not exhaustive.
You could call some of these cases a kind of thought. Maybe to self-modifying programs, a black-box executable algorithm counts as a thought; or maybe to beings who use the same information storage for genes and minds, lateral gene transfer counts as a thought.
But this is really just a matter of defining what the word “thought” may refer to. I can define it to include executable undocumented Turing Machines, which I don’t think humans like us can “think”. Or you could define it as something that, after careful argument, reduces to “whatever humans can think and no more”.
Sure. Leaving aside what we properly attach the label “thought” to, the thing I’m talking about in this context is roughly speaking the executed computations that motivate behavior. In that sense I would accept many of these options as examples of the thing I was talking about, although option 2 in particular is primarily something else and thus somewhat misleading to talk about that way.
I think you’re accepting and then withdrawing a premise here: you’ve identified them as interacting, and you’ve identified their interaction as being about the task at hand, and the ways of doing it, and the relative advantages of these ways. You’ve already done a lot of translation right there. So the setup of your problem assumes not only that you can translate their language, but that you in some part already have. All that’s left, translation-wise, is a question of precision.
Sure, to some level of precision, I agree that I can think any thought that any other cognitive system, however alien, can think. There might be a mind so alien that the closest analogue to its thought process while contemplating some event that I can fathom is “Look at that, it’s really interesting in some way,” but I’ll accept that this is in some part a translation and that “all that’s left” is a question of precision.
But if you mean to suggest by that that what’s left is somehow negligible, I strenuously disagree. Precision matters. If my dog and I are both contemplating a ball, and I am calculating the ratio between its volume and its surface area, and my dog is wondering whether I’ll throw it, we are on some level thinking the same thought (“Oh, look, a ball, it’s interesting in some way”), but to say that my dog therefore can understand what I’m thinking is so misleading as to be simply false.
I consider it possible for cognitive systems to exist that have the same relationship to my mind in some event that my mind has to my dog’s mind in that example.
Well, I don’t think I even implied that the dog could understand what you’re thinking. I don’t think dogs can think at all. What I’m claiming is that for anything that can think (and thus entertain the idea of thoughts that cannot be thought), there are no thoughts that cannot be thought. The difference between you and your dog isn’t just one of raw processing power. It’s easy to imagine a vastly more powerful processor than a human brain that is nevertheless incapable of thought (I think Yud.’s suggestion for an FAI is such a being, given that he’s explicit that it would not rise to the level of being a mechanical person).
Once we agree that it’s a point about precision, I would just say that this ground can always in principle be covered. Suppose the translation has gotten started, such that there is some set of thoughts at some level of precision that is translatable, call it A, and the terra incognita that remains, call it B. Given that the cognitive system you’re trying to translate can itself translate between A and B (the aliens understand themselves perfectly), there should be nothing barring you from doing so as well.
You might need extremely complex formulations of the material in A to capture anything in B, but this is allowed: we need some complex sentence to capture what the Germans mean by ‘schadenfreude’, but it would be wrong to think that because we don’t have a single term which corresponds exactly, we cannot translate or understand the term to just the same precision the Germans do.
I accept that you don’t consider dogs to have cognitive systems capable of having thoughts. I disagree. I suspect we don’t disagree on the cognitive capabilities of dogs, but rather on what the label “thought” properly refers to.
Perhaps we would do better to avoid the word “thought” altogether in this discussion in order to sidestep that communications failure. That said, I’m not exactly sure how to do that without getting really clunky, really fast. I’ll give it a shot, though.
I certainly agree with you that if cognitive system B (for example, the mind of a German speaker) has a simple lexical item Lb (for example, the word “schadenfreude”), ...and Lb is related to some cognitive state Slb (for example, the thought /schadenfreude/) such that Slb = M(Lb) (which we ordinarily colloquially express by saying that a word means some specific thought), ...and cognitive system A (for example, the mind of an English speaker) lacks a simple lexical item La such that Slb = M(La) (for example, the state we’d ordinarily express by saying that English doesn’t have a word for “schadenfreude”)... then we CANNOT conclude from this that A can’t enter Slb, nor that there exists no Sla such that A can enter Sla and the difference between Sla and Slb is < N, where N is the threshold below which we’d be comfortable saying that Sla and Slb are “the same thought” despite incidental differences which may exist.
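(For ease of reference, here is the same schema compressed into symbols. The notation d(·,·) for the “difference” between states is just my shorthand; nothing further is intended by it.)

```latex
% B has a simple lexical item L_b whose meaning is the cognitive state S_{Lb}:
S_{Lb} = M(L_b)
% A lacks any simple lexical item with that meaning:
\neg\exists L_a \,:\, M(L_a) = S_{Lb}
% Claimed non-conclusion: neither of the following may be inferred from the above,
%   (1) A cannot enter S_{Lb};
%   (2) there is no S_{La} that A can enter with d(S_{La}, S_{Lb}) < N,
% where N is the threshold below which we'd call two states "the same thought".
```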
So far, so good, I think. This is essentially the same claim you made above about the fact that there is no English word analogous to “schadenfreude” not preventing an English speaker from thinking the thought /schadenfreude/.
In those terms, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter Sa. Further, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter any state Sb such that the difference between Sa and Sb is < N.
Do you disagree with that? Or do you simply assert that if so, Sa and Sb aren’t thoughts? Or something else?
I agree that this is an issue of what ‘thoughts’ are, though I’m not sure it’s productive to sidestep the term, since if there’s an interesting point to be found in the OP, it’s one which involves claims about what a thought is.
In those terms, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter Sa. Further, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter any state Sb such that the difference between Sa and Sb is < N.
I’d like to disagree with that unqualifiedly, but I don’t think I have the grounds to do so, so my disagreement is a qualified one. I would say that there is no state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can recognise Sa as a cognitive state. So without the last ‘and such that’, this would be a metaphysical claim that all cognitive systems are capable of entertaining all thoughts, barring uninteresting accidental interference (such as a lack of memory capacity, a lack of sufficient lifespan, etc.). I think this is true, but alas.
With the qualification that ‘B would not be able to recognise Sa as a cognitive state’, this is a more modest epistemic claim, one which amounts to the claim that recognising something as a cognitive state is nothing other than entering that state to one degree of precision or another. This effectively marks out my opinion on your second assertion: for any Sa and any Sb, such that the difference between Sa and Sb cannot be < N, A (and/or B) cannot by any means recognise the difference as part of that cognitive state.
All this is a way of saying that you could never have reason to think that there are thoughts that you cannot think. Nothing could give you evidence for this, so it’s effectively a metaphysical speculation. Not only is evidence for such thoughts impossible, but evidence for the possibility of such thoughts is impossible.
I’m not exactly sure what it means to recognize something as a cognitive state, but I do assert that there can exist a state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can believe that A is entering into a particular cognitive state whenever (and only when) A enters Sa. That ought to be equivalent, yes?
This seems to lead me back to your earlier assertion that if there’s some shared “thought” at a very abstract level I and an alien mind can be said to share, then the remaining “terra incognita” between that and sharing the “thought” at a detailed level is necessarily something I can traverse.
I just don’t see any reason to expect that to be true. I am as bewildered by that claim as if you had said to me that if there’s some shared object that I and an alien can both perceive, then I can necessarily share the alien’s perceptions. My response to that claim would be “No, not necessarily; if the alien’s perceptions depend on sense organs or cognitive structures that I don’t possess, for example, then I may not be able to share those perceptions even if I’m perceiving the same object.” Similarly, my response to your claim is “No, not necessarily; if the alien’s ‘thought’ depends on cognitive structures that I don’t possess, for example, then I may not be able to share that ‘thought’.”
You suggest that because the aliens can understand one another’s thoughts, it follows that I can understand the alien’s thoughts, and I don’t see how that’s true either.
So, I dunno… I’m pretty stumped here. From my perspective you’re simply asserting the impossibility, and I cannot see how you arrive at that assertion.
Well, if the terra incognita has any relationship at all to the thoughts you do understand, such that it could be recognized as a part of or related to a cognitive state, then it is going to consist in stuff which bears inferential relations to what you do understand. These are relations you can necessarily traverse if the alien can traverse them. Add to that the fact that you’ve already assumed that the aliens largely share your world, that their beliefs are largely true, and that they are largely rational, and it becomes hard to see how you could justify the assertion at the top of your last post.
And that assertion has, thus far, gone undefended.
Well, I justify it by virtue of believing that my brain isn’t some kind of abstract general-purpose thought-having or inferential-relationship-traversing device; it is a specific bit of machinery that evolved to perform specific functions in a particular environment, just like my digestive system, and I find it no more plausible that I can necessarily traverse an inferential relationship that an alien mind can traverse than that I can necessarily extract nutrients from a food source that an alien digestive system can digest.
How do you justify your assertion that I can necessarily traverse an inferential relationship if an alien mind is capable of traversing it?
Well, your brain isn’t that, but it’s only a necessary but insufficient condition on your having thoughts. Understanding a language is both necessary and sufficient, and a language actually is the device you describe. Your competence with your own language ensures the possibility of your traversal in another.
The source of your doubt seemed to be that you didn’t think you possessed a general-purpose thought-having and inferential-relationship-traversing device. A brain is not such a device, we agree. But you do have such a device. A language is a general-purpose thought-having and inferential-relationship-traversing device, and you have that too. So, doubt dispelled?
Ah! OK, your comment now makes sense to me. Thanks. Agreed that my not believing that my brain is a general-purpose inferential relationship traversing device (hereafter gpirtd) is at the root of my not believing that all thoughts thinkable by any brain are thinkable by mine. I’m glad we agree that my brain is not a gpirtd. But you seem to be asserting that English (for example) is a gpirtd. Can you expand on your reasons for believing that? I can see no justification for that claim, either. But I do agree that if English were a gpirtd while my brain was not, it would follow that I could infer in English any thought that an alien mind could infer, at the same level of detail that the alien mind could think it, even if my brain was incapable of performing that inference.
So the claim is really that language is a gpirtd, excepting very defective cases (like sign-language or something). That language is an inference relation traversing device is, I think, pretty clear on the surface of things: logic is that in virtue of which we traverse inference relations (if anything is). This isn’t to say that English, or any language, is a system of logic, but only that logic is one of the things language allows us to do.
I think it actually follows from this that language is also a general purpose thought having device: thoughts are related, and their content is in large part (or perhaps entirely) constituted, by inferential relations. If we’re foundationalists about knowledge, then we think that the content of thoughts is not entirely constituted by inferential relations, but this isn’t a serious problem. If we can get anywhere in a process of translation, it is by assuming we share a world with whatever speaker we’re trying to understand. If we don’t assume this, and to whatever extent we don’t assume this, just to that extent we can’t recognize the gap as conceptual or cognitive. If an alien was reacting in part to facts of the shared world, and in part to facts of an unshared world (whatever that means), then just to the extent that the alien is acting on the latter facts, to that extent would we have to conclude that they are behaving irrationally. The reasons are invisible to us, after all. If we manage to infer from their behavior that they are acting on reasons we don’t have immediate access to, then just to the extent that we now view their behavior as rational, we now share that part of the world with them. We can’t decide that behavior is rational while knowing nothing of the action or the content of the reason, in the same sense that we can’t decide whether or not a belief is rational, or true, while knowing nothing of its meaning or the facts it aims at.
This last claim is most persuasively argued, I think, by showing that any example we might construct is going to fall apart. So it’s here that I want to re-ask my question: what would a thought that we cannot think even look like to us? My claim isn’t that there aren’t any such thoughts, only that we could never be given reason for thinking that there are.
ETA: as to the question of brains, here I think there is a sense in which there could be thoughts we cannot think. For example, thoughts which take more than a lifetime to think. But this isn’t an interesting case, and it’s fundamentally remediable. Imagine someone said that there were languages that are impossible for me to understand, and when I pressed him on what he meant, he just pointed out that I do not presently understand Chinese, and that he’s about to kill me. He isn’t making an interesting point, or one anyone would object to. If that is all the original quote intended, then it seems a bit trivial: the quoted person could have just pointed out that 1000 years ago, no one could have had any thoughts about airplanes.
Re: your ETA… agreed that there are thoughts I cannot think in the trivial sense you describe here, where the world is such that the events that would trigger that thought never arise before my death. What is at issue here is not that, but the less trivial claim that there are thoughts I cannot think by virtue of the way my mind works. To repeat my earlier proposed formalization: there can exist a state Sa such that mind A can enter Sa but mind B cannot enter Sa.
But you seem to also want to declare as trivial all cases where the reason B cannot enter Sa is because of some physical limitation of B, and I have more trouble with that.
I mean, sure, if A can enter Sa in response to some input and B cannot, I expect there to be some physical difference between A and B that accounts for this, and therefore some physical modification that can be made to B to remedy this. So sure, I agree that all such cases are “fundamentally remediable”. Worst-case, I transform B into an exact replica of A, and now B can enter state Sa, QED.
I’m enough of a materialist about minds to consider this possible in principle. But I would not agree that, because of this, the difference between A and B is trivial.
Well, at the risk of repeating myself in turn, I’ll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn’t think those thoughts.
I understand you to be saying in response that I can necessarily think those thoughts, since I can understand them at some level L1 by virtue of having an awareness of the same world A1 and A2 are interacting with (I agree so far) and that I can therefore understand them at any desired level L2 as long as the aliens themselves can traverse an inference relation between L1 and L2 because I have a language, and languages* are gpirtds (I disagree).
I’ve asked you why you believe English (for example) is a gpirtd, and you seem to have responded that English (like any non-defective language) allows us to do logic, and logic allows us to traverse inference relations. Did I understand that correctly?
If so, I don’t think your response is responsive. I would certainly agree that English (like any language) allows me to perform certain logical operations and therefore to traverse certain inference relations. I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.
I agree that if I’m wrong about that and English (for example) really does allow me to traverse all inference relations, then the rest of your argument holds.
I see no reason to believe that, though.
===
Except, you say, for defective cases like sign-language. I have absolutely no idea on what basis you judge sign language defective and English non-defective here, or whether you’re referring to some specific sign language or the whole class of sign languages. However, I agree with you that sign languages are not gpirtds. (I don’t believe English is either.)
Well, at the risk of repeating myself in turn, I’ll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn’t think those thoughts.
Well, I’d like a little more from you: I’d like an example where you are given reason to think that there are thoughts in the air, and reason to think that they are not thoughts you could think. As it stands, I of course have no objection to your example, because the example doesn’t go so far as suggesting the latter of the two claims.
So do you think you can come up with such an example? If not, don’t you think that counts powerfully against your reasons for thinking that such a situation is possible?
I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.
This is not exactly related to my claim. My claim is that you could never be given a reason for thinking that there are thoughts you cannot think. That is not the same as saying that there are thoughts you cannot think. So likewise, I would claim that you could never, deploying the inference relations available to you, infer that there are inference relations unavailable to you. Because if you can infer that they are inference relations, then they are available to you. (ETA: the point here, again, is that you cannot know that something is an inference relation while not knowing of what kind of relation it is. Recognizing that something is an inference relation just is recognizing that it is truth-preserving (say), and you could only recognize that by having a grip on the relation that it is.)
It’s extremely important to my argument that we keep in full view the fact that I am making an epistemic claim, not a metaphysical one.
From an epistemic position, the proposition P1: “Dave’s mind is capable of thinking the thought that A1 and A2 shared” is experimentally unfalsifiable. No matter how many times, or how many different ways, I try to think that thought and fail, that doesn’t prove I’m incapable of it, it just means that I haven’t yet succeeded.
But each such experiment provides additional evidence against P1. The more times I try and fail, and the more different ways I try and fail, the greater the evidence, and consequently the lower the probability I should assign to P1.
If you’re simply asserting that that probability can’t ever reach zero, I agree completely.
If you’re asserting that that probability can’t in practice ever reach epsilon, I mostly agree.
If you’re asserting that that probability can’t in practice get lower than, say, .01, I disagree.
(ETA: In case this isn’t clear, I mean here to propose “I repeatedly try to understand in detail the thought underlying A1 and A2′s cooperation and I repeatedly fail” as an example of a reason to think that the thought in question is not one I can think.)
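To make the shape of that update concrete, here is a toy Python sketch; the particular numbers are mine and purely illustrative, not anything we’ve agreed on.

```python
# Toy Bayesian update on P1 ("Dave's mind is capable of thinking that thought"),
# under assumed likelihoods: a good-faith attempt fails half the time even if I
# am capable of the thought, and always fails if I am not.
prior = 0.9                    # illustrative initial credence in P1
p_fail_given_capable = 0.5
p_fail_given_incapable = 1.0

credence = prior
for attempt in range(1, 21):
    numerator = credence * p_fail_given_capable
    credence = numerator / (numerator + (1 - credence) * p_fail_given_incapable)
    print(attempt, round(credence, 6))

# With these numbers the credence drops below 0.01 after about ten failed
# attempts, yet it never actually reaches zero.
```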
From an epistemic position, the proposition P1: “Dave’s mind is capable of thinking the thought that A1 and A2 shared” is experimentally unfalsifiable.
I think that overstates my claim: suppose Dave were a propositional logic machine, and the A’s were first-order logic machines. If we were observing Dave and the Aliens, and given that we are capable of thinking more expressively than either of them, then we could have reason for thinking that Dave cannot think the thoughts that the Aliens are thinking (let’s just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
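(To make that expressiveness gap vivid with an example of my own devising, not anything from your scenario: a first-order machine can entertain a quantified generalization like the first line below, while a propositional machine can only grind through unanalyzed atoms and their truth-functional combinations, as in the second line, so the general claim itself is simply not among its possible states.)

```latex
% A first-order thought, available to the Aliens and to us:
\forall x\,\big(\mathit{Signal}(x) \rightarrow \exists y\,\mathit{Causes}(y, x)\big)
% The best a propositional machine like Dave can manage is a finite surrogate:
(p_1 \rightarrow q_1) \land (p_2 \rightarrow q_2) \land \dots \land (p_k \rightarrow q_k)
```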
That, again, is not my point. My point is that Dave could never have reasons for thinking that he couldn’t think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do. If B is true, A is not something Dave can have reasons for. If Dave can have reason for thinking A, then B is false.
So suppose Dave has understood that the aliens are thinking. By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.
If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action, then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.
And to whatever extent we third party observers can see that Dave cannot understand them, just to that extent Dave cannot have reasons for thinking that the aliens are rational. In such a case, Dave may believe that the aliens are thinking and it might be impossible for him to understand them. But in this case Dave’s opinion that the aliens are thinking is irrational, even if it is true.
Thus, no one can ever be given any reason (i.e. there can never be any evidence) for thinking that there are thoughts that they cannot think. We can never know that there are no such thoughts either, I suppose.
suppose Dave were a propositional logic machine, and the A’s were first-order logic machines. [..] (let’s just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
Supposing both that all of those suppositions were true, and that we could somehow determine experimentally that they were true, then, yes, it would follow that the conclusion was provable.
I’m not sure how we would determine experimentally that they were true, though. I wouldn’t normally care, but you made such a point a moment ago about the importance of your claim being about what’s knowable rather than about what’s true that I’m not sure how to take your current willingness to bounce back and forth between that claim about what can be known in practice, and these arguments that depend on unknowable-in-practice presumptions.
That, again, is not my point.
Then I suppose we can safely ignore it for now.
Dave could never have reasons for thinking that he couldn’t think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do.
As I’ve already said, in this example I have reason to believe A1 and A2 are doing some thinking, and if I make a variety of good-faith-but-unsuccessful attempts to recapitulate that thinking I have reason to believe I’m incapable of doing so.
So suppose Dave has understood that the aliens are thinking.
Is it sufficient to suppose that Dave has reasons to believe the aliens are thinking?
By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.
I’m willing to posit all of those things, and I can imagine how they might follow from a belief that the aliens are thinking, for sufficiently convenient values of “world”, “largely”, and “relevant”. Before I lean too heavily on any of that I’d want to clarify those words further, but I’m not sure it actually matters.
If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action
I don’t agree with this. Just to pick a trivial example, if you write down a belief B on a slip of paper and hand it to my friend Sam, who I trust to be both a good judge of and an honest reporter of truth, and Sam says to me “B is true,” I have reason to think B is true but I don’t know the content of B.
then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.
The premise is false, but I agree that were it true your conclusion would follow.
I have reason to think B is true but I don’t know the content of B.
This seems to be a crucial disagreement, so we should settle it first. In your example, you said that you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs, and that you think Sam makes judgements roughly in the same ways you do.
So, you mostly understand the kinds of inferences Sam draws, and you mostly understand the beliefs that Sam has. If you infer from this that B is true because Sam says that it is, you must be assuming that B isn’t so odd a belief that Sam has no competence in assessing it. It must be something Sam is familiar enough with to be comfortable assessing. All that said, you’ve got a lot of beliefs about what B is, without knowing the specifics.
Essentially, your inference that B is true because Sam says that it is, is the belief that though you don’t know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.
In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs). Thinking that B is probably true just is believing you know something about B.
(ETA: I want to add how closely this example resembles your aliens example, both in the set up, and in how (I think) it should be answered. In both cases, we can look at the example more closely and discover that in drawing the conclusion that the aliens are thinking or that B is true, a great deal is assumed. I’m saying that you can either have these assumptions, but then my translation point follows, or you can deny the translation point, but then you can’t have the assumptions necessary to set up your examples.)
This seems to be a crucial disagreement, so we should settle it first.
All right.
you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs
Sure, if Sam and I freely interact and I consider him a good judge and honest reporter of truth, I will over time come to believe many of the things Sam believes.
Also, to the extent that I also consider myself a good judge of truth (which has to be nontrivial for me to trust my judgment of Sam in the first place), many of the beliefs I come to on observing the world will also be beliefs Sam comes to on observing the world, even if we don’t interact freely enough for him to convince me of his belief. This is a little trickier, because not all reasons for belief are fungible… I might have reasons for believing myself a good judge of whether Sam is a good judge of truth without having reasons for believing myself a good judge of truth more generally. But I’m willing to go along with it for now.
Agreed so far.
you think Sam makes judgements roughly in the same ways you do.
No, I don’t follow this at all. I might think Sam comes to the same conclusions that I would given the same data, but it does not follow in the least that he uses the same process to get there. That said, I’m not sure this matters to your argument.
So, you mostly understand the kinds of inferences Sam draws
Yes, both in the sense that I can mostly predict the inferences Sam will draw from given data, and in the sense that any arbitrarily-selected inference that Sam draws is very likely to be one that I can draw myself.
you mostly understand the beliefs that Sam has
Yes, in the same ways.
If you infer from this that B is true because Sam says that it is, you must be assuming that B isn’t so odd a belief that Sam has no competence in assessing it.
Something like this, yes. It is implicit in this example that I trust Sam to recognize whether B is outside his competence to evaluate, and to report that fact if so; since he hasn’t reported any such thing, I’m confident that B isn’t outside his competence.
you’ve got a lot of beliefs about what B is, without knowing the specifics.
Certainly. In addition to all of that stuff, I also have the belief that B can be written down on a slip of paper, with all that that implies.
Essentially, your inference that B is true because Sam says that it is, is the belief that though you don’t know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.
Statistically speaking, yes: given an arbitrarily selected B1 for which Sam would report “B1 is true,” the prior probability that I already know B1 is high.
But this is of course in no sense guaranteed. For example, B might be “I’m wearing purple socks,” in response to which Sam checks the color of your socks, and subsequently reports to me that B is true. In this case I don’t in fact know what color socks you are wearing.
In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs).
Again, statistically speaking, sure.
Thinking that B is probably true just is believing you know something about B.
No. You are jumping from “X is reliable evidence of Y” to “X just is Y” without justification.
If X smells good, I have reason to believe that X tastes good, because most things that smell good also taste good. But it is quite possible for me to both smell and taste X and conclude “X smells good and tastes bad.” If “thinking that X smells good just is believing that X tastes good” were true, I would at that point also believe “X tastes good and tastes bad,” which is not in fact what happens. Therefore I conclude that “thinking that X smells good just is believing that X tastes good” is false.
Similarly, if Sam reports B as true, I have good reason to think B is probably true, and I also have good reason to think I know something important about the content of B (e.g., that it is or follows from one of my own beliefs), because most things that Sam would report as true I also know something important about the contents of (e.g., ibid). But it’s quite possible for Sam to report B as true without me knowing anything important about the content of B. I similarly conclude that “thinking that B is probably true just is believing [I] know something [important] about B” is false.
In case it matters, not only is it possible for me to believe B is true when I don’t in fact know the content of B (e.g., B is “Abrooks’ socks are purple” and Sam checks your socks and tells me “B is true” when I neither know what B says nor know that Abrooks’ socks are purple), it’s also possible for me to have good reason to believe that I don’t know the content of B in this situation (e.g., if Sam further tells me “Dave, you don’t know the content of B”… which in fact I don’t, and Sam has good reason to believe I don’t.)
No. You are jumping from “X is reliable evidence of Y” to “X just is Y” without justification.
You know that B is likely to be one of your beliefs, or something that follows straightforwardly from your beliefs. It makes no difference if B actually turns out not to be one of your beliefs or something that follows straightforwardly therefrom. Likewise, you would have good reason to guess that the outcome of a die roll is 1-5 as opposed to 6. If it turns out that it comes up 6, this does not impugn the probability involved in your initial estimate. Knowing how dice work is knowing something about this die roll and its outcome. By knowing how dice work, you know that the outcome of this roll is probably 1-5, even if it happens to be 6. Knowing how Sam’s judgements work is knowing something about this judgement.
None of this, I grant you, involves knowing the specific content of B. But all of this is knowledge about the content of B. If Sam said to you “Dave, you don’t know the content of B”, you ought to reply “Sam, I know enough about your beliefs and judgements that I really do know something about the content of B, namely that it’s something you would judge to be true on the basis of a shared set of beliefs.”
Your setup, I think, draws an arbitrary distinction between knowledge of the specific content of B and knowledge of B as a member of someone’s set of beliefs. Even if there’s any distinction here (i.e. if we’re foundationalists of some kind), it still doesn’t follow that knowledge of the second kind is wholly unrelated to knowledge of the first. In fact, that would be astonishing.
Therefore I conclude that “thinking that X smells good just is believing that X tastes good” is false.
So, I’m not saying that because you have reason to believe B to be true, you therefore have reason to believe that you know the content of B. What I’m saying is that because you have reason to believe B to be true, you therefore do know something about the content of B.
I hope we can agree that in common usage, it’s unproblematic for me to say that I don’t know what color your socks are. I don’t, in fact, know what color your socks are. I don’t even know that you’re wearing socks.
But, sure, I think it’s more probable that your socks (if you’re wearing them) are white than that they’re purple, and that they probably aren’t transparent, and that they probably aren’t pink. I agree that I know something about the color of your socks, despite not knowing the color of your socks.
And, sure, if you’re thinking “my socks are purple” and I’m thinking “Abrooks’ socks probably aren’t transparent,” these kinds of knowledge aren’t wholly unrelated to one another. But that doesn’t mean that either my brain or my command of the English language is capable of traversing the relationship from one to the other.
Much as you think I’m drawing arbitrary distinctions, I think you’re eliding over real distinctions.
Okay, so it sounds like we’re agreed that your reasons for believing B are at the same time things you take yourself to know about the content of B. Would you accept that this is always going to be true? Or can you think of a counterexample?
If this is always true, then we should at least take this in support of my more general claim that you cannot have reason to think that something is rational or true, i.e. that something is thinking, without taking yourself to know something about the content of that thought.
If we’re on the same page so far, then we’ve agreed that you can’t recognise something as thought without assuming you can understand something about its content. Now the question remains, can you understand something to be a thought or part of a thought while at the same time having reason to think it is fundamentally unintelligible to you? Or does the very recognition of something as a thought immediately give you reason to think you can understand it, while evidence against your understanding justifies you only in concluding that something isn’t thought after all?
Yes, my reasons for believing B are, in the very limited sense we’re now talking about, things I know about the content of B (e.g., that the value of a die roll is probably between 1 and 5).
Yes, agreed that if I think something is thinking, I know something about the content of its thought.
Further agreed that in the highly extended sense that you’re using “understanding” (the same sense in which I can be said to “know” what color socks you’re wearing), I understand everything that can be understood by every thinking system, and my inability to understand a thing is evidence against its being a thought.
So, OK… you’ve proven your point.
I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.
I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.
Oh, come on, this has been a very interesting discussion. And I don’t take myself to have proven any sort of point. Basically, if we’ve agreed to all of the above, then we still have to address the original point about precision.
Now, I don’t have a very good argument here, for thinking that you can go from knowing some limited and contextual things about the content of a thought to knowing the content with as much precision as the thinker. But here goes: suppose you have a cooperative and patient alien, and that you yourself are intent on getting the translation right. Also, let’s assume you have a lot of time, and all the resources you could want for pursuing the translation you want. So given an unlimited time, and full use of metaphor, hand gestures, extended and complex explanations in what terms you do manage to get out of the context, corrections of mistakes, etc. etc., I think you could cover any gap so long as you can take the first step. And so long as the thought isn’t actually logically alien.
This means that the failure to translate something should be taken not as evidence that it might be impossible, but as evidence that it is in fact possible to translate. After all, if you know enough to have reason to believe that you’ve failed, you have taken the first few steps already.
As to whether or not logically alien thought, thought which involves inferences of which we are incapable, is possible, I don’t know. I think that if we encountered such thought, we would pretty much only have reason to think that it’s not thought.
So, forget about proving anything. Have I made this plausible? Does it now seem reasonable to you to be surprised (contra the original quote) to hear that there are thoughts we cannot think? If I’ve utterly failed to convince you, after all, I would take that as evidence against my point.
I would summarize your argument as “If we can recognize them as thinking, we are necessarily mutually intelligible in some highly constrained fashion, which makes it likely that we are mutually intelligible in the more general case. Conversely, if we aren’t mutually intelligible in the general case, we can’t recognize them as thinking.”
My objection has been and remains with the embedded assumption that if two systems are mutually intelligible in some highly constrained fashion, it’s likely that they are mutually intelligible in the more general case. On average this might well be true, but the exceptions are important. (Similar things are true when playing Russian roulette. On average it’s perfectly safe, but I wouldn’t recommend playing.)
My reason for objecting remains what it was: evolved systems are constrained by the environment in which they evolved, and are satisficers rather than optimizers, and are therefore highly unlikely to be general-purpose systems. This is as true of cognitive systems as it is of digestive systems. I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
That said, I would certainly agree that when faced with a system I have reason to believe is thinking, the best strategy for me to adopt is to assume that I can understand its thoughts given enough time and effort, and to make that effort. (Similarly, when faced with a system I have reason to believe needs food, I should assume that I can feed it given enough time and effort, and make that effort.)
But when faced with a system that I have reason to believe is thinking and where all plausible efforts have failed, I am not justified in concluding that it isn’t thinking after all, rather than concluding that its thinking is simply alien to me.
I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
I guess my problem with this claim is similar to my problem with the original quote: the analogy between sensations and thoughts is pretty weak, such that the inference from incompatible sensations to incompatible thoughts is dubious. The analogy between thoughts and digestion is even weaker. The objection that we’re organisms of a certain kind, with certain biological limits, is one which involves taking an extremely general point, and supposing that it bears on this issue in particular. But how? Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support. The connection between neural activity and brain structures on the one hand and thoughts on the other is not so clear that we can just jump from such general observations about the one to specific claims about the other.
Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support.
Yes, it does seem like an obvious connection to me. But, all right...
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
And as I said, I consider the common reference class of evolved systems a source of useful information here as well.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role. (Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually unintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
I don’t think this is a good inference: it doesn’t follow from the fact that defective brains are constrained in some of their cognitive capacities that for healthy brains there are thoughts that they cannot think (and not for reasons of memory storage, etc.). First, this involves an inference from facts about an unhealthy brain to facts about a healthy brain. Second, this involves an inference from certain kinds of limitations on unhealthy brains to other kinds of limitations on healthy brains. After all, we’ve agreed that we’re not talking about limits on thinking caused by a lack of resources like memory. None of the empirical work showing that brain damage causes cognitive limits is strictly relevant to the question of whether or not other languages are translatable into our own.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role.
This is still my position.
(Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually unintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
No, I don’t consider that to be possible, though it’s a matter of how broadly we construe ‘thinking’ and ‘language’. But where thinking is the sort of thing that’s involved in truth values and inference relations (the truth predicate is probably not actually necessary), and where language is what we are using to communicate right now, then I would say “there is nothing that thinks that cannot use language, and everything that can use language can to that extent think.”
As I said the last time this came up, I don’t consider the line you want to draw on “for reasons of memory storage, etc” to be both well-defined and justified.
More precisely, I freely grant that if there are two minds A and B such that A can think thought T and B cannot think T, that there is some physical difference D between A and B that causes that functional difference, and whether D is in the category of “memory storage, etc.” is not well defined. If any physical difference counts, then I guess I agree with you: if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
there is nothing that thinks that cannot use language, and everything that can use language can to that extent think
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
Well, I take it for granted that you and I can think the same thought (say, “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains. So the fact that there are physical differences between two thinkers doesn’t immediately mean that they cannot think the same thoughts. I expect you can think all the same thoughts that I think if we were to make a project of it. And yet it is implausible (and as far as I know empirically unsupported) to think that part or all (or even any) of your brain would as a result become structurally identical to mine.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
So we can’t infer from physical differences to cognitive incompatibilities. I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch? Have I misrepresented your view?
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
Yes, I think so, though of course there wasn’t a ‘first thinker/language user’.
Well, I take it for granted that you and I can think the same thought (say, “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains.
This is another place where I want to avoid treating “Y is near enough to X for practical considerations” as equivalent to “Y is X” and then generalizing out from that to areas outside those practical considerations.
I would certainly agree that you and I can think two thoughts Ta and Tb and have them be similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to “It is sunny in Chicago” might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, that either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity to be exactly identical.
So physical differences can matter, but among healthy brains, they almost always don’t.
Sure, but why are you limiting the domain of discourse in this way?
If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases.
I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don’t see why ignoring him is justified.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
I would say rather that the relevant parts of two English speakers’ brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility.
As above, this is equivalent to what you said for practical considerations.
I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch?
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
The point isn’t that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended.
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn’t find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?
I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don’t expect repeating myself to change that. If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
I consider it evidence, just weak and indirect in relation to (what I take to be) much stronger and more directly related evidence that we can assume that anything we could recognize as thinking is something we can think. Such that, on balance, I would be surprised to hear that there are such thoughts.
It sounds like we’ve pretty much exhausted ourselves here, so thanks for the discussion.
Can you rotate four dimensional solids in your head?
Edit: it looks like I’m not the first to suggest this, but I’ll add that since computers are capable not just of representing more than three spatial dimensions, but of tracking objects through them, these are probably “possible thoughts” even if no human can represent them mentally.
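As a minimal sketch of that claim (my own illustration, with arbitrary example values): the bookkeeping a computer does to rotate a point through four spatial dimensions is routine arithmetic, even though no human can picture the result.

```python
import numpy as np

def plane_rotation_4d(i, j, theta):
    """Rotation of R^4 by angle theta in the plane spanned by axes i and j."""
    R = np.eye(4)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j] = -np.sin(theta)
    R[j, i] = np.sin(theta)
    return R

# Rotate one vertex of the unit tesseract by 45 degrees in the x-w plane.
vertex = np.array([1.0, 1.0, 1.0, 1.0])
print(np.round(plane_rotation_4d(0, 3, np.pi / 4) @ vertex, 6))  # ≈ [0, 1, 1, 1.414214]
```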
Can you rotate four dimensional solids in your head?
Well, suppose I’m colorblind from birth. I can’t visualize green. Is this significantly different from the example of 4d rotations?
If so, how? (ETA: after all, we can do all the math associated with 4d rotations, so we’re not deficient in conceptualizing them, just in imagining them. Arguably, computers can’t visualize them either. They just do the math and move on).
If not, then is this the only kind of thought (i.e. visualizations, etc.) that we can defend as potentially unthinkable by us? If this is the only kind of thought thus defensible, then we’ve rendered the original quote trivial: it infers from the fact that it’s possible to be unable to see a color that it’s possible to be unable to think a thought. But if these kinds of visualizations are the only kinds of thoughts we might not be able to think, then the quote isn’t saying anything.
If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom?
I’m not a physicist, but I have been taught that beyond the simplest atoms, the calculations become so difficult that we’re unable to determine whether our quantum models actually predict the configurations we observe. In this case, we can’t simply do the math and move on, because the math is too difficult. With our own mental hardware, it appears that we can neither visualize nor predict the behavior of particles on that scale, above a certain level of complexity, but that doesn’t mean that a jupiter brain wouldn’t be able to.
If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom?
I’m not discounting qualia (that’s it’s own discussion), I’m just saying that if these are the only kinds of thoughts which we can defend as being potentially unthinkable by us, then the original quote is trivial.
So one strategy you might take to defend thoughts we cannot think is this: thinking is or supervenes on a physical process, and thus it necessarily takes time. All human beings have a finite lifespan. Some thought could be formulated such that the act of thinking it with a human brain would take longer than any possible lifespan, or perhaps just an infinite amount of time. Therefore, there are thoughts we cannot think.
I think this suggestion is basically the same as yours: what prevents us from thinking this thought is some limited resources, like memory or lifespan, or something like that. Similarly, I could suggest a language that is in principle untranslatable, just because all well formed sentences and clauses in that language are long enough that we couldn’t remember a whole one.
But it would be important to distinguish, in these cases, between two different kinds of unthinkability or untranslatability. Both the infinite (or just super complex) thoughts and the super long sentences are translatable into a language we can understand, in principle. There’s nothing about those thoughts or sentences, or our thoughts or sentences, that makes them incompatible. The incompatibility arises from a fact about our biology. So in the same line, we could say that some alien species’ language is untranslatable because they speak and write in some medium we don’t have the technology to access. The problem there isn’t with the language or the act of translation.
In sum, I think that this suggestion (and perhaps the original quote) trades on an equivocation between two different kinds of unthinkability. But if the only defensible kind of unthinkability is one on the basis of some accidental limitation of access or resources, then I can’t see what’s interesting about the idea. It’s no more interesting then than the point that I can’t speak Chinese because I haven’t learned it.
Sure. Leaving aside what we properly attach the label “thought” to, the thing I’m talking about in this context is, roughly speaking, the executed computations that motivate behavior. In that sense I would accept many of these options as examples of the thing I was talking about, although option 2 in particular is primarily something else and thus somewhat misleading to talk about that way.
I think you’re accepting and then withdrawing a premise here: you’ve identified them as interacting, and you’ve identified their interaction as being about the task at hand, the ways of doing it, and the relative advantages of those ways. You’ve already done a lot of translation right there. So the setup of your problem assumes not only that you can translate their language, but that you in some part already have. All that’s left, translation-wise, is a question of precision.
Sure, to some level of precision, I agree that I can think any thought that any other cognitive system, however alien, can think. There might be a mind so alien that the closest analogue to its thought process while contemplating some event that I can fathom is “Look at that, it’s really interesting in some way,” but I’ll accept that this is in some part a translation and “all that’s left” is a question of precision.
But if you mean to suggest by that that what’s left is somehow negligible, I strenuously disagree. Precision matters. If my dog and I are both contemplating a ball, and I am calculating the ratio between its volume and surface area, and my dog is wondering whether I’ll throw it, we are on some level thinking the same thought (“Oh, look, a ball, it’s interesting in some way”), but to say that my dog therefore can understand what I’m thinking is so misleading as to be simply false.
I consider it possible for cognitive systems to exist that have the same relationship to my mind in some event that my mind has to my dog’s mind in that example.
Well, I don’t think I even implied that the dog could understand what you’re thinking. I don’t think dogs can think at all. What I’m claiming is that for anything that can think (and thus entertain the idea of thoughts that cannot be thought), there are no thoughts that cannot be thought. The difference between you and your dog isn’t just one of raw processing power. It’s easy to imagine a vastly more powerful processor than a human brain that is nevertheless incapable of thought (I think Yud.’s suggestion for an FAI is such a being, given that he’s explicit that it would not rise to the level of being a mechanical person).
Once we agree that it’s a point about precision, I would just say that this ground can always in principle be covered. Suppose the translation has gotten started, such that there is some set of thoughts at some level of precision that is translatable, call it A, and the terra incognita that remains, call it B. Given that the cognitive system you’re trying to translate can itself translate between A and B (the aliens understand themselves perfectly), there should be nothing barring you from doing so as well.
You might need extremely complex formulations of the material in A to capture anything in B, but this is allowed: we need some complex sentence to capture what the Germans mean by ‘schadenfreude’, but it would be wrong to think that, because we don’t have a single term which corresponds exactly, we cannot translate or understand the term to just the same precision the Germans do.
I accept that you don’t consider dogs to have cognitive systems capable of having thoughts. I disagree. I suspect we don’t disagree on the cognitive capabilities of dogs, but rather on what the label “thought” properly refers to.
Perhaps we would do better to avoid the word “thought” altogether in this discussion in order to sidestep that communications failure. That said, I’m not exactly sure how to do that without getting really clunky, really fast. I’ll give it a shot, though.
I certainly agree with you that if cognitive system B (for example, the mind of a German speaker) has a simple lexical item Lb (for example, the word “schadenfreude”),
...and Lb is related to some cognitive state Slb (for example, the thought /schadenfreude/) such that Slb = M(Lb) (which we ordinarily colloquially express by saying that a word means some specific thought),
...and cognitive system A (for example, the mind of an English speaker) lacks a simple lexical item La such that Slb=M(La) (for example, the state we’d ordinarily express by saying that English doesn’t have a word for “schadenfreude”)...
...then we CANNOT conclude from this that A can’t enter Slb, nor that there exists no Sla such that A can enter Sla and the difference between Sla and Slb is < N, where N is the threshold below which we’d be comfortable saying that Sla and Slb are “the same thought” despite incidental differences which may exist.
So far, so good, I think. This is essentially the same claim you made above about the fact that there is no English word analogous to “schadenfreude” not preventing an English speaker from thinking the thought /schadenfreude/.
In those terms, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter Sa. Further, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter any state Sb such that the difference between Sa and Sb is < N.
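(For readability, here is a compact restatement of those two assertions, in my own paraphrase, writing $d(S_a, S_b)$ for the “difference” between states and keeping $N$ as the sameness threshold defined above:

$$\exists S_a:\ A \text{ can enter } S_a \ \wedge\ B \text{ cannot enter } S_a$$

$$\exists S_a:\ A \text{ can enter } S_a \ \wedge\ \neg\exists S_b\,\big(B \text{ can enter } S_b \ \wedge\ d(S_a, S_b) < N\big)$$

Nothing in what follows depends on this notation; it is just the two claims above in one place.)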
Do you disagree with that? Or do you simply assert that if so, Sa and Sb aren’t thoughts? Or something else?
I agree that this is an issue of what ‘thoughts’ are, though I’m not sure it’s productive to sidestep the term, since if there’s an interesting point to be found in the OP, it’s one which involves claims about what a thought is.
I’d like to disagree with that unqualifiedly, but I don’t think I have the grounds to do so, so my disagreement is a qualified one. I would say that there is no state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can recognise Sa as a cognitive state. So without the last ‘and such that’, this would be a metaphysical claim that all cognitive systems are capable of entertaining all thoughts, barring uninteresting accidental interference (such as a lack of memory capacity, a lack of sufficient lifespan, etc.). I think this is true, but, alas, I have no way of establishing it.
With the qualification that ‘B would not be able to recognise Sa as a cognitive state’, this is a more modest epistemic claim, one which amounts to the claim that recognising something as a cognitive state is nothing other than entering that state to one degree of precision or another. This effectively marks out my opinion on your second assertion: for any Sa and any Sb, such that the difference between Sa and Sb cannot be < N, A (and/or B) cannot by any means recognise the difference as part of that cognitive state.
All this is a way of saying that you could never have reason to think that there are thoughts that you cannot think. Nothing could give you evidence for this, so it’s effectively a metaphysical speculation. Not only is evidence for such thoughts impossible, but evidence for the possibility of such thoughts is impossible.
I’m not exactly sure what it means to recognize something as a cognitive state, but I do assert that there can exist a state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can believe that A is entering into a particular cognitive state whenever (and only when) A enters Sa. That ought to be equivalent, yes?
This seems to lead me back to your earlier assertion that if there’s some shared “thought” at a very abstract level I and an alien mind can be said to share, then the remaining “terra incognita” between that and sharing the “thought” at a detailed level is necessarily something I can traverse.
I just don’t see any reason to expect that to be true. I am as bewildered by that claim as if you had said to me that if there’s some shared object that I and an alien can both perceive, then I can necessarily share the alien’s perceptions. My response to that claim would be “No, not necessarily; if the alien’s perceptions depend on sense organs or cognitive structures that I don’t possess, for example, then I may not be able to share those perceptions even if I’m perceiving the same object.” Similarly, my response to your claim is “No, not necessarily; if the alien’s ‘thought’ depends on cognitive structures that I don’t possess, for example, then I may not be able to share that ‘thought’.”
You suggest that because the aliens can understand one another’s thoughts, it follows that I can understand the alien’s thoughts, and I don’t see how that’s true either.
So, I dunno… I’m pretty stumped here. From my perspective you’re simply asserting the impossibility, and I cannot see how you arrive at that assertion.
Well, if the terra incognita has any relationship at all to the thoughts you do understand, such that it could be recognized as a part of or related to a cognitive state, then it is going to consist in stuff which bears inferential relations to what you do understand. These are relations you can necessarily traverse if the alien can traverse them. Add to that the fact that you’ve already assumed that the aliens largely share your world, that their beliefs are largely true, and that they are largely rational, and it becomes hard to see how you could justify the assertion at the top of your last post.
And that assertion has, thus far, gone undefended.
Well, I justify it by virtue of believing that my brain isn’t some kind of abstract general-purpose thought-having or inferential-relationship-traversing device; it is a specific bit of machinery that evolved to perform specific functions in a particular environment, just like my digestive system, and I find it no more plausible that I can necessarily traverse an inferential relationship that an alien mind can traverse than that I can necessarily extract nutrients from a food source that an alien digestive system can digest.
How do you justify your assertion that I can necessarily traverse an inferential relationship if an alien mind is capable of traversing it?
Well, your brain isn’t that, but it’s only a necessary (not sufficient) condition on your having thoughts. Understanding a language is both necessary and sufficient, and a language actually is the device you describe. Your competence with your own language ensures the possibility of your traversal in another.
Sorry, I didn’t follow that at all.
The source of your doubt seemed to be that you didn’t think you possessed a general-purpose thought-having and inferential-relationship-traversing device. A brain is not such a device, we agree. But you do have such a device. A language is a general-purpose thought-having and inferential-relationship-traversing device, and you have that too. So, doubt dispelled?
Ah! OK, your comment now makes sense to me. Thanks.
Agreed that my not believing that my brain is a general-purpose inferential relationship traversing device (hereafter gpirtd) is at the root of my not believing that all thoughts thinkable by any brain are thinkable by mine.
I’m glad we agree that my brain is not a gpirtd.
But you seem to be asserting that English (for example) is a gpirtd.
Can you expand on your reasons for believing that? I can see no justification for that claim, either.
But I do agree that if English were a gpirtd while my brain was not, it would follow that I could infer in English any thought that an alien mind could infer, at the same level of detail that the alien mind could think it, even if my brain was incapable of performing that inference.
So the claim is really that language is a gpirtd, excepting very defective cases (like sign-language or something). That language is an inference relation traversing device is, I think, pretty clear on the surface of things: logic is that in virtue of which we traverse inference relations (if anything is). This isn’t to say that English, or any language, is a system of logic, but only that logic is one of the things language allows us to do.
I think it actually follows from this that language is also a general-purpose thought-having device: thoughts are related, and their content is in large part (or perhaps entirely) constituted, by inferential relations. If we’re foundationalists about knowledge, then we think that the content of thoughts is not entirely constituted by inferential relations, but this isn’t a serious problem.

If we can get anywhere in a process of translation, it is by assuming we share a world with whatever speaker we’re trying to understand. If we don’t assume this, and to whatever extent we don’t assume this, just to that extent we can’t recognize the gap as conceptual or cognitive. If an alien was reacting in part to facts of the shared world, and in part to facts of an unshared world (whatever that means), then just to the extent that the alien is acting on the latter facts, to that extent we would have to conclude that they are behaving irrationally. The reasons are invisible to us, after all. If we manage to infer from their behavior that they are acting on reasons we don’t have immediate access to, then just to the extent that we now view their behavior as rational, we now share that part of the world with them. We can’t decide that behavior is rational while knowing nothing of the action or the content of the reason, in the same sense that we can’t decide whether or not a belief is rational, or true, while knowing nothing of its meaning or the facts it aims at.
This last claim is most persuasively argued, I think, by showing that any example we might construct is going to fall apart. So it’s here that I want to re-ask my question: what would a thought that we cannot think even look like to us? My claim isn’t that there aren’t any such thoughts, only that we could never be given reason for thinking that there are.
ETA: as to the question of brains, here I think there is a sense in which there could be thoughts we cannot think. For example, thoughts which take more than a lifetime to think. But this isn’t an interesting case, and it’s fundamentally remediable. Imagine someone said that there were languages that are impossible for me to understand, and when I pressed him on what he meant, he just pointed out that I do not presently understand Chinese, and that he’s about to kill me. He isn’t making an interesting point, or one anyone would object to. If that is all the original quote intended, then it seems a bit trivial: the quoted person could have just pointed out that 1000 years ago, no one could have had any thoughts about airplanes.
Re: your ETA… agreed that there are thoughts I cannot think in the trivial sense you describe here, where the world is such that the events that would trigger that thought never arise before my death. What is at issue here is not that, but the less trivial claim that there are thoughts I cannot think by virtue of the way my mind works. To repeat my earlier proposed formalization: there can exist a state Sa such that mind A can enter Sa but mind B cannot enter Sa.
But you seem to also want to declare as trivial all cases where the reason B cannot enter Sa is because of some physical limitation of B, and I have more trouble with that.
I mean, sure, if A can enter Sa in response to some input and B cannot, I expect there to be some physical difference between A and B that accounts for this, and therefore some physical modification that can be made to B to remedy this. So sure, I agree that all such cases are “fundamentally remediable”. Worst-case, I transform B into an exact replica of A, and now B can enter state Sa, QED.
I’m enough of a materialist about minds to consider this possible in principle. But I would not agree that, because of this, the difference between A and B is trivial.
Well, at the risk of repeating myself in turn, I’ll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn’t think those thoughts.
I understand you to be saying in response that I can necessarily think those thoughts, since I can understand them at some level L1 by virtue of having an awareness of the same world A1 and A2 are interacting with (I agree so far) and that I can therefore understand them at any desired level L2 as long as the aliens themselves can traverse an inference relation between L1 and L2 because I have a language, and languages* are gpirtds (I disagree).
I’ve asked you why you believe English (for example) is a gpirtd, and you seem to have responded that English (like any non-defective language) allows us to do logic, and logic allows us to traverse inference relations. Did I understand that correctly?
If so, I don’t think your response is responsive. I would certainly agree that English (like any language) allows me to perform certain logical operations and therefore to traverse certain inference relations. I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.
I agree that if I’m wrong about that and English (for example) really does allow me to traverse all inference relations, then the rest of your argument holds.
I see no reason to believe that, though.
===
Except, you say, for defective cases like sign-language. I have absolutely no idea on what basis you judge sign language defective and English non-defective here, or whether you’re referring to some specific sign language or the whole class of sign languages. However, I agree with you that sign languages are not gpirtds. (I don’t believe English is either.)
Well, I’d like a little more from you: I’d like an example where you are given reason to think that there are thoughts in the air, and reason to think that they are not thoughts you could think. As it stands, I of course have no objection to your example, because the example doesn’t go so far as suggesting the latter of the two claims.
So do you think you can come up with such an example? If not, don’t you think that counts powerfully against your reasons for thinking that such a situation is possible?
This is not exactly related to my claim. My claim is that you could never be given a reason for thinking that there are thoughts you cannot think. That is not the same as saying that there are thoughts you cannot think. So likewise, I would claim that you could never, deploying the inference relations available to you, infer that there are inference relations unavailable to you. Because if you can infer that they are inference relations, then they are available to you. (ETA: the point here, again, is that you cannot know that something is an inference relation while not knowing of what kind of relation it is. Recognizing that something is an inference relation just is recognizing that it is truth-preserving (say), and you could only recognize that by having a grip on the relation that it is.)
It’s extremely important to my argument that we keep in full view the fact that I am making an epistemic claim, not a metaphysical one.
From an epistemic position, the proposition P1: “Dave’s mind is capable of thinking the thought that A1 and A2 shared” is experimentally unfalsifiable. No matter how many times, or how many different ways, I try to think that thought and fail, that doesn’t prove I’m incapable of it, it just means that I haven’t yet succeeded.
But each such experiment provides additional evidence against P1. The more times I try and fail, and the more different ways I try and fail, the greater the evidence, and consequently the lower the probability I should assign to P1.
If you’re simply asserting that that probability can’t ever reach zero, I agree completely.
If you’re asserting that that probability can’t in practice ever get below some arbitrarily small epsilon, I mostly agree.
If you’re asserting that that probability can’t in practice get lower than, say, .01, I disagree.
(ETA: In case this isn’t clear, I mean here to propose “I repeatedly try to understand in detail the thought underlying A1 and A2′s cooperation and I repeatedly fail” as an example of a reason to think that the thought in question is not one I can think.)
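To make the kind of updating I have in mind concrete, here is a minimal sketch; the likelihood numbers are invented purely for illustration, and nothing in the argument depends on them:

```python
# Minimal sketch of the updating described above (my own illustration;
# the likelihoods are made up purely to show the shape of the update).
# P1 = "Dave's mind is capable of thinking the thought that A1 and A2 shared".
# A failed attempt is more likely if P1 is false than if it is true, so each
# failure lowers the probability assigned to P1 without ever reaching zero.

def update_on_failure(p, p_fail_given_capable=0.6, p_fail_given_incapable=1.0):
    """One Bayesian update on observing a single failed attempt."""
    numerator = p_fail_given_capable * p
    denominator = numerator + p_fail_given_incapable * (1.0 - p)
    return numerator / denominator

p1 = 0.9  # start out fairly confident the thought is thinkable by Dave
for n in range(1, 11):
    p1 = update_on_failure(p1)
    print(f"after {n} failed attempts: P(P1) = {p1:.4f}")
# The probability falls with every failure but never reaches zero, which
# matches the point that P1 is strictly unfalsifiable while still losing
# probability with each unsuccessful attempt.
```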
I think that overestimates my claim: suppose Dave were a propositional logic machine, and the A’s were first-order logic machines. If we were observing Dave and the Aliens, and given that we are capable of thinking more expressively than either of them, then we could have reason for thinking that Dave cannot think the thoughts that the Aliens are thinking (let’s just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
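(For concreteness, a standard illustration of the expressiveness gap being assumed here, not anything specific to Dave or the Aliens: a first-order machine can represent $$\forall x\,\big(F(x) \rightarrow G(x)\big),$$ which quantifies over an unbounded domain, while a propositional machine can only form finite combinations of fixed atomic sentences such as $(F_a \rightarrow G_a) \wedge (F_b \rightarrow G_b)$, so no single propositional formula captures the universal claim once the domain is unbounded.)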
That, again, is not my point. My point is that Dave could never have reasons for thinking that he couldn’t think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do. If B is true, A is not something Dave can have reasons for. If Dave can have reason for thinking A, then B is false.
So suppose Dave has understood that the aliens are thinking. By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.
If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action, then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.
And to whatever extent we third party observers can see that Dave cannot understand them, just to that extent Dave cannot have reasons for thinking that the aliens are rational. In such a case, Dave may believe that the aliens are thinking and it might be impossible for him to understand them. But in this case Dave’s opinion that the aliens are thinking is irrational, even if it is true.
Thus, no one can ever be given any reason (i.e. there can never be any evidence) for thinking that there are thoughts that they cannot think. We can never know that there are no such thoughts either, I suppose.
Supposing both that all of those suppositions were true, and that we could somehow determine experimentally that they were true, then, yes, it would follow that the conclusion was provable.
I’m not sure how we would determine experimentally that they were true, though. I wouldn’t normally care, but you made such a point a moment ago about the importance of your claim being about what’s knowable rather than about what’s true that I’m not sure how to take your current willingness to bounce back and forth between that claim about what can be known in practice, and these arguments that depend on unknowable-in-practice presumptions.
Then I suppose we can safely ignore it for now.
As I’ve already said, in this example I have reason to believe A1 and A2 are doing some thinking, and if I make a variety of good-faith-but-unsuccessful attempts to recapitulate that thinking I have reason to believe I’m incapable of doing so.
Is it sufficient to suppose that Dave has reasons to believe the aliens are thinking?
I’m willing to posit all of those things, and I can imagine how they might follow from a belief that the aliens are thinking, for sufficiently convenient values of “world”, “largely”, and “relevant”. Before I lean too heavily on any of that I’d want to clarify those words further, but I’m not sure it actually matters.
I don’t agree with this. Just to pick a trivial example, if you write down a belief B on a slip of paper and hand it to my friend Sam, who I trust to be both a good judge of and an honest reporter of truth, and Sam says to me “B is true,” I have reason to think B is true but I don’t know the content of B.
The premise is false, but I agree that were it true your conclusion would follow.
This seems to be a crucial disagreement, so we should settle it first. In your example, you said that you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs, and that you think Sam makes judgements roughly in the same ways you do.
So, you mostly understand the kinds of inferences Sam draws, and you mostly understand the beliefs that Sam has. If you infer from this that B is true because Sam says that it is, you must be assuming that B isn’t so odd a belief that Sam has no competence in assessing it. It must be something Sam is familiar enough with to be comfortable assessing. All that said, you’ve got a lot of beliefs about what B is, without knowing the specifics.
Essentially, your inference that B is true because Sam says that it is, is the belief that though you don’t know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.
In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs). Thinking that B is probably true just is believing you know something about B.
(ETA: I want to add how closely this example resembles your aliens example, both in the set up, and in how (I think) it should be answered. In both cases, we can look at the example more closely and discover that in drawing the conclusion that the aliens are thinking or that B is true, a great deal is assumed. I’m saying that you can either have these assumptions, but then my translation point follows, or you can deny the translation point, but then you can’t have the assumptions necessary to set up your examples.)
All right.
Sure, if Sam and I freely interact and I consider him a good judge and honest reporter of truth, I will over time come to believe many of the things Sam believes.
Also, to the extent that I also consider myself a good judge of truth (which has to be nontrivial for me to trust my judgment of Sam in the first place), many of the beliefs I come to on observing the world will also be beliefs Sam comes to on observing the world, even if we don’t interact freely enough for him to convince me of his belief. This is a little trickier, because not all reasons for belief are fungible… I might have reasons for believing myself a good judge of whether Sam is a good judge of truth without having reasons for believing myself a good judge of truth more generally. But I’m willing to go along with it for now.
Agreed so far.
No, I don’t follow this at all. I might think Sam comes to the same conclusions that I would given the same data, but it does not follow in the least that he uses the same process to get there. That said, I’m not sure this matters to your argument.
Yes, both in the sense that I can mostly predict the inferences Sam will draw from given data, and in the sense that any arbitrarily-selected inference that Sam draws is very likely to be one that I can draw myself.
Yes, in the same ways.
Something like this, yes. It is implicit in this example that I trust Sam to recognize if B is outside his competence to evaluate, and to report that fact if true; so it follows from his not having reported any such thing that I’m confident B isn’t outside his competence.
Certainly. In addition to all of that stuff, I also have the belief that B can be written down on a slip of paper, with all that that implies.
Statistically speaking, yes: given an arbitrarily selected B1 for which Sam would report “B1 is true,” the prior probability that I already know B1 is high.
But this is of course in no sense guaranteed. For example, B might be “I’m wearing purple socks,” in response to which Sam checks the color of your socks, and subsequently reports to me that B is true. In this case I don’t in fact know what color socks you are wearing.
Again, statistically speaking, sure.
No. You are jumping from “X is reliable evidence of Y” to “X just is Y” without justification.
If X smells good, I have reason to believe that X tastes good, because most things that smell good also taste good. But it is quite possible for me to both smell and taste X and conclude “X smells good and tastes bad.” If “thinking that X smells good just is believing that X tastes good” were true, I would at that point also believe “X tastes good and tastes bad,” which is not in fact what happens. Therefore I conclude that “thinking that X smells good just is believing that X tastes good” is false.
Similarly, if Sam reports B as true, I have good reason to think B is probably true, and I also have good reason to think I know something important about the content of B (e.g., that it is or follows from one of my own beliefs), because most things that Sam would report as true I also know something important about the contents of (e.g., ibid). But it’s quite possible for Sam to report B as true without me knowing anything important about the content of B. I similarly conclude that “thinking that B is probably true just is believing [I] know something [important] about B” is false.
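(Put in probabilistic terms, as my own gloss on the smell/taste point: $P(Y \mid X)$ being high makes $X$ good evidence for $Y$, but it neither makes $X$ and $Y$ the same thing nor guarantees that $Y$ holds on any particular occasion when $X$ does. In the running example, $X$ is “Sam reports that B is true” and $Y$ is “I know something important about the content of B.”)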
In case it matters, not only is it possible for me to believe B is true when I don’t in fact know the content of B (e.g., B is “Abrooks’ socks are purple” and Sam checks your socks and tells me “B is true” when I neither know what B says nor know that Abrooks’ socks are purple), it’s also possible for me to have good reason to believe that I don’t know the content of B in this situation (e.g., if Sam further tells me “Dave, you don’t know the content of B”… which in fact I don’t, and Sam has good reason to believe I don’t.)
You know that B is likely to be one of your beliefs, or something that follows straightforwardly from your beliefs. It makes no difference if B actually turns out not to be one of your beliefs or something that follows straightforwardly therefrom. Likewise, you would have good reason to guess that the outcome of a die roll is 1-5 as opposed to 6. If it turns out that it comes up 6, this does not impugn the probability involved in your initial estimate. Knowing how dice work is knowing something about this die roll and its outcome. By knowing how dice work, you know that the outcome of this roll is probably 1-5, even if it happens to be 6. Knowing how Sam’s judgements work is knowing something about this judgement.
None of this, I grant you, involves knowing the specific content of B. But all of this is knowledge about the content of B. If Sam said to you “Dave, you don’t know the content of B”, you ought to reply “Sam, I know enough about your beliefs and judgements that I really do know something about the content of B, namely that it’s something you would judge to be true on the basis of a shared set of beliefs.”
Your setup, I think, draws an arbitrary distinction between knowledge of the specific content of B and knowledge of B as a member of someone’s set of beliefs. Even if there’s any distinction here (i.e. if we’re foundationalists of some kind), it still doesn’t follow that knowledge of the second kind is wholly unrelated to knowledge of the first. In fact, that would be astonishing.
So, I’m not saying that because you have reason to believe B to be true, you therefore have reason to believe that you know the content of B. What I’m saying is that because you have reason to believe B to be true, you therefore do know something about the content of B.
I hope we can agree that in common usage, it’s unproblematic for me to say that I don’t know what color your socks are. I don’t, in fact, know what color your socks are. I don’t even know that you’re wearing socks.
But, sure, I think it’s more probable that your socks (if you’re wearing them) are white than that they’re purple, and that they probably aren’t transparent, and that they probably aren’t pink. I agree that I know something about the color of your socks, despite not knowing the color of your socks.
And, sure, if you’re thinking “my socks are purple” and I’m thinking “Abrooks’ socks probably aren’t transparent,” these kinds of knowledge aren’t wholly unrelated to one another. But that doesn’t mean that either my brain or my command of the English language is capable of traversing the relationship from one to the other.
Much as you think I’m drawing arbitrary distinctions, I think you’re eliding over real distinctions.
Okay, so it sounds like we’re agreed that your reasons for believing B are at the same time things you take yourself to know about the content of B. Would you accept that this is always going to be true? Or can you think of a counterexample?
If this is always true, then we should at least take this in support of my more general claim that you cannot have reason to think that something is rational or true, i.e. that something is thinking, without taking yourself to know something about the content of that thought.
If we’re on the same page so far, then we’ve agreed that you can’t recognise something as thought without assuming you can understand something about its content. Now the question remains, can you understand something to be a thought or part of a thought while at the same time having reason to think it is fundamentally unintelligible to you? Or does the very recognition of something as a thought immediately give you reason to think you can understand it, while evidence against your understanding justifies you only in concluding that something isn’t thought after all?
Yes, my reasons for believing B are, in the very limited sense we’re now talking about, things I know about the content of B (e.g., that the value of a die roll is probably between 1 and 5).
Yes, agreed that if I think something is thinking, I know something about the content of its thought.
Further agreed that in the highly extended sense that you’re using “understanding” (the same sense in which I can be said to “know” what color socks you’re wearing), I understand everything that can be understood by every thinking system, and my inability to understand a thing is evidence against its being a thought.
So, OK… you’ve proven your point.
I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.
Oh, come on, this has been a very interesting discussion. And I don’t take myself to have proven any sort of point. Basically, if we’ve agreed to all of the above, then we still have to address the original point about precision.
Now, I don’t have a very good argument here for thinking that you can go from knowing some limited and contextual things about the content of a thought to knowing the content with as much precision as the thinker. But here goes: suppose you have a cooperative and patient alien, and that you yourself are intent on getting the translation right. Also, let’s assume you have a lot of time, and all the resources you could want for pursuing the translation. So given unlimited time, and full use of metaphor, hand gestures, extended and complex explanations in whatever terms you do manage to extract from the context, corrections of mistakes, etc., I think you could cover any gap so long as you can take the first step. And so long as the thought isn’t actually logically alien.
This means that the failure to translate something should be taken not as evidence that it might be impossible, but as evidence that it is in fact possible to translate. After all, if you know enough to have reason to believe that you’ve failed, you have taken the first few steps already.
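One toy way to picture the “cover any gap step by step” claim, under the assumption (which is doing all the work, and is of course exactly what is in dispute) that each round of metaphor, explanation, and correction removes some fixed fraction of the remaining imprecision:

```python
# A toy sketch of the iterative-translation picture above (my own illustration).
# Assumption: each round of clarification removes a fixed fraction of the
# remaining imprecision. Under that assumption the gap drops below any
# threshold N after finitely many rounds, provided the first step is possible.

def rounds_to_reach(threshold_n, initial_gap=1.0, fraction_removed=0.5):
    """Count rounds of clarification until the remaining gap is below N."""
    gap, rounds = initial_gap, 0
    while gap >= threshold_n:
        gap *= (1.0 - fraction_removed)
        rounds += 1
    return rounds

print(rounds_to_reach(0.001))   # 10 rounds at 50% progress per round
print(rounds_to_reach(1e-9))    # 30 rounds: any precision is eventually reachable
```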
As to whether or not logically alien thought, thought which involves inferences of which we are incapable, is possible, I don’t know. I think that if we encountered such thought, we would pretty much only have reason to think that it’s not thought.
So, forget about proving anything. Have I made this plausible? Does it now seem reasonable to you to be surprised (contra the original quote) to hear that there are thoughts we cannot think? If I’ve utterly failed to convince you, after all, I would take that as evidence against my point.
My position on this hasn’t changed, really.
I would summarize your argument as “If we can recognize them as thinking, we are necessarily mutually intelligible in some highly constrained fashion, which makes it likely that we are mutually intelligible in the more general case. Conversely, if we aren’t mutually intelligible in the general case, we can’t recognize them as thinking.”
My objection has been and remains with the embedded assumption that if two systems are mutually intelligible in some highly constrained fashion, it’s likely that they are mutually intelligible in the more general case. On average this might well be true, but the exceptions are important. (Similar things are true when playing Russian roulette: most of the time nothing goes wrong, but I wouldn’t recommend playing.)
My reason for objecting remains what it was: evolved systems are constrained by the environment in which they evolved, and are satisficers rather than optimizers, and are therefore highly unlikely to be general-purpose systems. This is as true of cognitive systems as it is of digestive systems. I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
That said, I would certainly agree that when faced with a system I have reason to believe is thinking, the best strategy for me to adopt is to assume that I can understand its thoughts given enough time and effort, and to make that effort. (Similarly, when faced with a system I have reason to believe needs food, I should assume that I can feed it given enough time and effort, and make that effort.)
But when faced with a system that I have reason to believe is thinking, and with which all plausible efforts have failed, I am not justified in concluding that it isn’t thinking after all, rather than concluding that its thinking is simply alien to me.
I guess my problem with this claim is similar to my problem with the original quote: the analogy between sensations and thoughts is pretty weak, such that the inference from incompatible sensations to incompatible thoughts is dubious. The analogy between thoughts and digestion is even weaker. The objection that we’re organisms of a certain kind, with certain biological limits, is one which involves taking an extremely general point, and supposing that it bears on this issue in particular. But how? Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support. The connection between neural activity and brain structures on the one hand and thoughts on the other is not so clear that we can just jump from such general observations about the one to specific claims about the other.
So how can we fill out this reasoning?
Yes, it does seem like an obvious connection to me. But, all right...
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
And as I said, I consider the common reference class of evolved systems a source of useful information here as well.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role. (Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually nonintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
I don’t think this is a good inference: it doesn’t follow from the fact that defective brains are constrained in some of their cognitive capacities that for healthy brains there are thoughts that they cannot think (and not for reasons of memory storage, etc.). First, this involves an inference from facts about an unhealthy brain to facts about a healthy brain. Second, this involves an inference from certain kinds of limitations on unhealthy brains to other kinds of limitations on healthy brains. After all, we’ve agreed that we’re not talking about limits on thinking caused by a lack of resources like memory. None of the empirical work showing that brain damage causes cognitive limits is strictly relevant to the question of whether or not other languages are translatable into our own.
This is still my position.
No, I don’t consider that to be possible, though it’s a matter of how broadly we construe ‘thinking’ and ‘language’. But where thinking is the sort of thing that’s involved in truth values and inference relations (the truth predicate is probably not actually necessary), and where language is what we are using to communicate right now, then I would say “there is nothing that thinks that cannot use language, and everything that can use language can to that extent think.”
As I said the last time this came up, I don’t consider the line you want to draw on “for reasons of memory storage, etc” to be both well-defined and justified.
More precisely, I freely grant that if there are two minds A and B such that A can think thought T and B cannot think T, that there is some physical difference D between A and B that causes that functional difference, and whether D is in the category of “memory storage, etc.” is not well defined. If any physical difference counts, then I guess I agree with you: if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
Well, I take it for granted that you and I can think the same thought (say, “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains. So the fact that there are physical differences between two thinkers doesn’t immediately mean that they cannot think the same thoughts. I expect you could think all the same thoughts that I think if we were to make a project of it. And yet it is implausible (and as far as I know empirically unsupported) to think that part or all (or even any) of your brain would as a result become structurally identical to mine.
So physical differences can matter, but among healthy brains, they almost always don’t. No two English speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
So we can’t infer from physical differences to cognitive incompatibilities. I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch? Have I misrepresented your view?
Yes, I think so, though of course there wasn’t a ‘first thinker/language user’.
This is another place where I want to avoid treating “Y is near enough to X for practical considerations” as equivalent to “Y is X” and then generalizing out from that to areas outside those practical considerations.
I would certainly agree that you and I can think two thoughts Ta and Tb and have them be similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to “It is sunny in Chicago” might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity to be exactly identical.
Sure, but why are you limiting the domain of discourse in this way?
If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases.
I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don’t see why ignoring him is justified.
I would say rather that the relevant parts of two English speakers’ brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility.
As above, this is equivalent to what you said for practical considerations.
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
The point isn’t that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended.
Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn’t find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?
I don’t know if you’re missing anything.
I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don’t expect repeating myself to change that. If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
I consider it evidence, just weak and indirect in relation to (what I take to be) much stronger and more directly related evidence that we can assume that anything we could recognize as thinking is something we can think, such that, on balance, I would be surprised to hear that there are thoughts we cannot think.
It sounds like we’ve pretty much exhausted ourselves here, so thanks for the discussion.
Can you rotate four dimensional solids in your head?
Edit: it looks like I’m not the first to suggest this, but I’ll add that since computers are capable not just of representing more than three spatial dimensions, but of tracking objects through them, these are probably “possible thoughts” even if no human can represent them mentally.
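For instance (my own sketch, assuming numpy is available): a program can rotate a corner of a tesseract in the x-w plane and track exactly where it ends up, even though no human can picture the motion.

```python
# Minimal sketch: rotating a 4-D point in the x-w plane (assumes numpy).
import numpy as np

def rotation_xw(theta):
    """4x4 rotation matrix acting in the x-w plane; leaves y and z fixed."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c,   0.0, 0.0, -s ],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [s,   0.0, 0.0, c  ],
    ])

vertex = np.array([1.0, 1.0, 1.0, 1.0])   # one corner of a unit tesseract
rotated = rotation_xw(np.pi / 4) @ vertex
print(rotated)  # the machine tracks the 4-D position exactly; we can only do the math
```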
Well, suppose I’m colorblind from birth. I can’t visualize green. Is this significantly different from the example of 4d rotations?
If so, how? (ETA: after all, we can do all the math associated with 4d rotations, so we’re not deficient in conceptualizing them, just in imagining them. Arguably, computers can’t visualize them either. They just do the math and move on).
If not, then is this the only kind of thought (i.e. visualizations, etc.) that we can defend as potentially unthinkable by us? If this is the only kind of thought thus defensible, then we’ve rendered the original quote trivial: it infers from the fact that it’s possible to be unable to see a color that it’s possible to be unable to think a thought. But if these kinds of visualizations are the only kinds of thoughts we might not be able to think, then the quote isn’t saying anything.
If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom?
I’m not a physicist, but I have been taught that beyond the simplest atoms, the calculations become so difficult that we’re unable to determine whether our quantum models actually predict the configurations we observe. In this case, we can’t simply do the math and move on, because the math is too difficult. With our own mental hardware, it appears that we can neither visualize nor predict the behavior of particles on that scale, above a certain level of complexity, but that doesn’t mean that a Jupiter brain wouldn’t be able to.
I’m not discounting qualia (that’s its own discussion), I’m just saying that if these are the only kinds of thoughts which we can defend as being potentially unthinkable by us, then the original quote is trivial.
So one strategy you might take to defend thoughts we cannot think is this: thinking is or supervenes on a physical process, and thus it necessarily takes time. All human beings have a finite lifespan. Some thought could be formulated such that the act of thinking it with a human brain would take longer than any possible lifespan, or perhaps just an infinite amount of time. Therefore, there are thoughts we cannot think.
I think this suggestion is basically the same as yours: what prevents us from thinking this thought is a limitation of resources, like memory or lifespan, or something like that. Similarly, I could suggest a language that is in principle untranslatable, just because all well-formed sentences and clauses in that language are long enough that we couldn’t remember a whole one.
But it would be important to distinguish, in these cases, between two different kinds of unthinkability or untranslatability. Both the infinite (or just super complex) thoughts and the super long sentences are translatable into a language we can understand, in principle. There’s nothing about those thoughts or sentences, or our thoughts or sentences, that makes them incompatible. The incompatibility arises from a fact about our biology. So in the same line, we could say that some alien species’ language is untranslatable because they speak and write in some medium we don’t have the technology to access. The problem there isn’t with the language or the act of translation.
In sum, I think that this suggestion (and perhaps the original quote) trades on an equivocation between two different kinds of unthinkability. But if the only defensible kind of unthinkability is one based on some accidental limitation of access or resources, then I can’t see what’s interesting about the idea. It’s no more interesting, then, than the point that I can’t speak Chinese because I haven’t learned it.