I can’t speak for Dan, of course, but for my own part: I think this whole discussion has gotten muddled by failing to distinguish clearly enough between claims about the world and claims about language.
I’m not exactly sure what you or Dan mean by “incorrect word usage” here, so I can’t easily answer your first question. But I think the distinction you draw between beliefs and brain-states-that-could-be-beliefs-if-they-had-intentional-content-but-since-they-don’t-aren’t is not an important one, and using the label “belief” for the former but not the latter is not a lexical choice I endorse.
I think that lexical choice is responsible for you saying things like “Boltzmann brains couldn’t have beliefs about Obama.”
I think Boltzmann brains can enter brain-states which correspond to the brain-states that you would call “beliefs about Obama” were I to enter them, and I consider that correspondence strong enough that I see no justification for not also calling the BB’s brain-states “beliefs about Obama.”
As far as I can tell, you and I agree about all of this except for what things in the world the word “belief” properly labels.
Do you feel the same way about the word “evidence”? Do you feel comfortable saying that an observer can have evidence regarding the state of some external system even if its brain state is not appropriately causally entangled with that system?
I obviously agree with you that how we use “belief” and “evidence” is a lexical choice. But I think it is a lexical choice with important consequences. Using these words in an internalist manner generally indicates (perhaps even encourages) a failure to recognize the importance of distinguishing between syntax and semantics, a failure I think has been responsible for a lot of confused philosophical thinking. But this is a subject for another post.
This gets difficult, because there’s a whole set of related terms I suspect we aren’t quite using the same way, so there’s a lot of underbrush that needs clearing away before we can communicate clearly.
When I’m trying to be precise, I talk about experiences providing evidence which constrains expectations of future experiences. That said, in practice I do also treat clusters of experience that demonstrate persistent patterns of correlation as evidence of the state of external systems, though I mostly think of that sort of talk as kinda sloppy shorthand for an otherwise too-tedious-to-talk-about set of predicted experiences.
So I feel reasonably comfortable saying that an experience E1 can serve as evidence of an external system S1. Even if I don’t actually believe that S1 exists, I’m still reasonably comfortable saying that E1 is evidence of S1. (E.g., being told that Santa Claus exists is evidence of the existence of Santa Claus, even if it turns out everyone is lying.)
If I have a whole cluster of experiences E1...En, all of which reinforce one another and reinforce my inference of S1, and I don’t have any experiences which serve as evidence that S1 doesn’t exist, I start to have compelling evidence of S1 and my confidence in S1 increases. All of this can occur even if it turns out that S1 doesn’t actually exist. And, of course, some other system S2 can exist without my having any inkling of it. This is all fairly unproblematic.
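(To make that concrete, here is a minimal sketch with entirely made-up numbers of the kind of update I have in mind: as long as each experience is judged more likely under S1 than under not-S1, my credence in S1 climbs, and nothing in the calculation consults whether S1 actually exists.)

```python
# A toy illustration (made-up numbers): repeated Bayesian updates on
# experiences E1..E5, each of which I judge more likely if S1 exists
# than if it doesn't. Credence in S1 rises either way, because the
# calculation never consults whether S1 is actually real.

def update(credence, p_e_given_s1, p_e_given_not_s1):
    """One Bayesian update of P(S1) on a single experience."""
    favoring = credence * p_e_given_s1
    against = (1 - credence) * p_e_given_not_s1
    return favoring / (favoring + against)

credence = 0.1  # prior credence in S1
for i in range(5):
    credence = update(credence, p_e_given_s1=0.8, p_e_given_not_s1=0.3)
    print(f"after E{i + 1}: P(S1) = {credence:.3f}")
```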
So, moving on to the condition you’re describing: E1 causes me to infer the existence of S1, and S1 actually does exist, but S1 is not causally entangled with E1. I find it simpler to think about a similar condition where there exist two external systems, S1 and S2, such that S2 causes E1 and on the basis of E1 I infer the existence of S1, while remaining ignorant of S2. For example, I believe Alice is my birth mother, but in fact Alice (S1) and my birth mother (S2) are separate people. My birth mother sends me an anonymous email (E1) saying “I am your birth mother, and I have cancer.” I infer that Alice has cancer. It turns out that Alice does have cancer, but her cancer had no causal relationship to the email being sent.
In such an arrangement I am comfortable saying that E1 is evidence that S1 has cancer, even though E1 is not causally entangled with S1’s cancer.
Further, when discussing such an arrangement, I can say that the brain-states caused by E1 are about S1, or about S2, or about both, or about neither, and it’s not at all clear to me what, if anything, depends on which of those lexical choices I make. Mostly, I think asking what E1 is really “about” is a wrong question; if it is really about anything, it’s about the entire conjoined state of the universe, including both S1 and S2 and everything else, but really, who cares?
And if instead there is no S2, and E1 just spontaneously comes into existence, the situation is basically the same as the above; it’s just harder for me to come up with plausible examples.
Perhaps it would help to introduce a distinction here. Let’s distinguish internal evidence and external evidence. P1 counts as internal evidence for P2 if it is procedurally rational for me to alter my credence in P2 once I come to accept P1, given my background knowledge. P1 is external evidence for P2 if the truth of P1 genuinely counterfactually depends on the truth of P2. That is, P1 would be false (or less frequently true, if we’re dealing with statistical claims) if P2 were false. A proposition can be internal evidence without being external evidence. In your anonymous email example, the email is internal evidence but not external evidence.
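Roughly, and only as a gloss on those definitions (the counterfactual clause is approximated here by a statistical inequality):

```latex
% Internal evidence: accepting P1 should change my credence in P2,
% given my background knowledge K.
P(P_2 \mid P_1, K) \neq P(P_2 \mid K)

% External evidence: P1 counterfactually depends on P2; in the
% statistical case, P1 is less frequently true when P2 is false.
P(P_1 \mid \neg P_2) < P(P_1 \mid P_2)
```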
Which conception of evidence is the right one to use will probably depend on context. When we are attempting to describe an individual’s epistemic status—the amount of reliable information they possess about the world—then it seems that external evidence is the relevant variety of evidence to consider. And if two observers differ substantially in the external evidence available to them, it seems justifiable to place them in separate reference classes for certain anthropic explanations. Going back to an early example of Eliezer’s:
I’m going to close with the thought experiment that initially convinced me of the falsity of the Modesty Argument. In the beginning it seemed to me reasonable that if feelings of 99% certainty were associated with a 70% frequency of true statements, on average across the global population, then the state of 99% certainty was like a “pointer” to 70% probability. But at one point I thought: “What should an (AI) superintelligence say in the same situation? Should it treat its 99% probability estimates as 70% probability estimates because so many human beings make the same mistake?” In particular, it occurred to me that, on the day the first true superintelligence was born, it would be undeniably true that—across the whole of Earth’s history—the enormously vast majority of entities who had believed themselves superintelligent would be wrong. The majority of the referents of the pointer “I am a superintelligence” would be schizophrenics who believed they were God.
A superintelligence doesn’t just believe the bald statement that it is a superintelligence—it presumably possesses a very detailed, very accurate self-model of its own cognitive systems, tracks in detail its own calibration, and so on. But if you tell this to a mental patient, the mental patient can immediately respond: “Ah, but I too possess a very detailed, very accurate self-model!” The mental patient may even come to sincerely believe this, in the moment of the reply. Does that mean the superintelligence should wonder if it is a mental patient? This is the opposite extreme of Russell Wallace asking if a rock could have been you, since it doesn’t know if it’s you or the rock.
If the superintelligence were engaging in anthropic reasoning, should it put itself in the same reference class as the mental patients in all cases? If we think identical (or similar) internal evidence requires that they be in the same reference class, then I think the answer may be yes. But I think the answer is fairly obviously no, and this is because of the vast difference in the epistemic situations of the superintelligence and the mental patients, a difference attributable to differences in external evidence.
I accept your working definitions for “internal evidence” and “external evidence.”
“When we are attempting to describe an individual’s epistemic status—the amount of reliable information they possess about the world—then it seems that external evidence is the relevant variety of evidence to consider.”
I want to be a little careful about the words “epistemic status” and “reliable information,” because a lot of confusion can be introduced through the use of terms as abstract as those.
I remember reading once that courtship behavior in robins is triggered by the visual stimulus of a patch of red taller than it is wide. I have no idea whether this is actually true, but suppose it is. The idea was that the ancestral robin environment didn’t contain any such stimuli other than female robins in estrus, so at the time it was a reliable piece of evidence to rely on. Now, of course, there are lots of visual stimuli in that category, so you get robins initiating courtship displays at red socks on clotheslines and at Coke cans.
So, OK. Given that, and using your terms, and assuming it makes any sense to describe what a robin does here as updating on evidence at all, then a vertical red swatch is always internal evidence of a fertile female, and it was external evidence a million years ago (when it “genuinely” counterfactually depended on the presence of such a female) but it is not now. If we put some robins in an environment from which we eliminate all other red things, it would be external evidence again. (Yes?)
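(Putting entirely made-up numbers on that, to show how the same internal update can correspond to very different external reliability depending on the environment:)

```python
# Made-up numbers for the robin example. The robin's response to a
# vertical red patch is the same in every environment; what changes is
# how reliably the patch actually indicates a fertile female, i.e. how
# much the patch counterfactually depends on such a female being present.

def p_female_given_red(p_female, p_red_given_female, p_red_given_other):
    """P(fertile female | vertical red patch), by Bayes' rule."""
    from_female = p_female * p_red_given_female
    from_other = (1 - p_female) * p_red_given_other
    return from_female / (from_female + from_other)

# Ancestral environment: almost nothing else produces such a patch.
print(p_female_given_red(0.01, 0.9, 0.0001))  # ~0.99

# Modern environment: red socks, Coke cans, etc.
print(p_female_given_red(0.01, 0.9, 0.05))    # ~0.15
```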
If what I am interested in is whether a given robin is correct about whether it’s in the presence of a fertile female, external evidence is the relevant variety of information to consider.
If what I am interested in is what conclusions the robin will actually reach about whether it’s in the presence of a fertile female, internal evidence is the relevant variety of information to consider.
If that is consistent with your claim about the robin’s epistemic status and about the amount of reliable information the robin possesses about the world, then great, I’m with you so far. (If not, this is perhaps a good place to back up and see where we diverged.)
“...if two observers differ substantially in the external evidence available to them, it seems justifiable to place them in separate reference classes for certain anthropic explanations.”
Sure, when available external evidence is particularly relevant to those anthropic explanations.
“If the superintelligence were engaging in anthropic reasoning, should it put itself in the same reference class as the mental patients in all cases?”
So A and B both believe they’re superintelligences. As it happens, A is in fact an SI, and B is in fact a mental patient. And the question is: should A consider itself in the same reference class as B? Yes?
“...I think the answer is fairly obviously no, and this is because of the vast difference in the epistemic situations of the superintelligence and the mental patients...”
Absolutely agreed. I don’t endorse any decision theory that results in A concluding that it’s more likely to be a mental patient than an SI in a typical situation like this, and this is precisely because of the nature of the information available to A in such a situation.
“If we think identical (or similar) internal evidence requires that they be in the same reference class, then I think the answer may be yes.”
Wait, what?
Why in the world would A and B have similar internal evidence?
I mean, in any normal environment, if A is a superintelligence and B is a mental patient, I would expect A to have loads of information on the basis of which it is procedurally rational for A to conclude that A is in a different reference class than B. Which is internal evidence, on your account. No?
But, OK. If I assume that A and B do have similar internal evidence… huh. Well, that implicitly assumes that A is in a pathologically twisted epistemic environment. I have trouble imagining such an environment, but the world is more complex than I can imagine. So, OK, sure, I can assume such an environment, in a suitably hand-waving sort of way.
And sure, I agree with you: in such an environment, A should consider itself in the same reference class as B. A is mistaken, of course, which is no surprise given that it’s in such an epistemically tainted environment.
Now, I suppose one might say something like “Sure, A is justified in doing so, but A should not do so, because A should not believe falsehoods.” Which would reveal a disconnect relating to the word “should,” in addition to everything else. (When I say that A should believe falsehoods in this situation, I mean I endorse the decision procedure that leads to doing so, not that I endorse the result.)
But we at least ought to agree, given your word usage, that it is procedurally rational for A to conclude that it’s in the same reference class as B in such a tainted environment, even though that isn’t true. Yes?