I actually think a bigger weakness in your argument is here:
I believe that I believe that I believe that I exist.
And so on and so forth, ad infinitum. An infinite chain of statements, all of which are exactly true. I have satisfied Eliezer’s (fatuous) requirements for assigning a certain level of confidence to a proposition.
That can’t actually be infinite. If nothing else, your brain cannot possibly store an infinite regression of beliefs at once, so at some point your belief in belief must run out of steps.
I do not need to actually store those beliefs—it is only necessary to be able to state them—and I wrote a program that outputs those beliefs.
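The program itself isn’t reproduced in the thread, but a minimal Python sketch of the idea might look like this (the names and details are illustrative, not the original code):

```python
# Illustrative sketch, not the original program: generate the n-th
# statement of the regress on demand instead of storing the whole chain.

def belief_statement(n: int) -> str:
    """Return 'I believe that ' nested n times around 'I exist.'"""
    return "I believe that " * n + "I exist."

def belief_chain():
    """Lazily yield the chain one statement at a time; only a counter is stored."""
    n = 1
    while True:
        yield belief_statement(n)
        n += 1

chain = belief_chain()
for _ in range(3):
    print(next(chain))
# I believe that I exist.
# I believe that I believe that I exist.
# I believe that I believe that I believe that I exist.
```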
Except by their nature, if you’re not storing them, then the next one is not true.
Let me put it this way.
Step 1: You have a thought that X is true. (Let’s call this 1 bit of information.)
Step 2: You notice yourself thinking step 1. Now you say “I appear to believe that X is true.” (Now this is 2 bits of information: X, and belief in X.)
Step 3: You notice yourself thinking step 2. Now you say “I appear to believe that I believe that X is true.” (3 bits of information: X, belief in X, and belief in belief in X.)
If at any point you stop storing one of those steps, the next step becomes untrue; if you are not storing, say, step 11 in your head right now (belief in belief in belief...), then step 12 would be false, because you don’t actually believe step 11. After all, “belief” is fundamentally a question of your state of mind, and if you don’t have state X in your mind, if you’ve never even explicitly considered state X, it can’t really be a belief, right?
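To make the storage point concrete, here is a rough sketch (same loose accounting as the steps above, with characters standing in for “bits”):

```python
# Rough illustration of the storage argument: holding steps 1..n in
# mind at once means storing n statements, and the total size grows
# without bound as n does.

def stored_steps(n: int) -> list:
    """Explicitly store every statement up to step n of the regress."""
    return ["I believe that " * k + "I exist." for k in range(n)]

for n in (1, 10, 100):
    total_chars = sum(len(s) for s in stored_steps(n))
    print(f"steps stored: {n:>3}, characters held: {total_chars}")
# The cost grows roughly quadratically in n, so no finite memory can
# hold the infinite regression all at once.
```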
I see.
I thought that you don’t actually have to store those beliefs in your head; my idea was the program above, that being able to state them is enough.
Do you disagree?
I disagree, but in any case you are not (nor is anyone else) an agent with a consistent set of beliefs, so it doesn’t matter if that is true or not.
The real issue here is that there is clearly some possibility that you are mistaken enough about the nature of belief that it is impossible for you to have a belief that consists of a million iterations of “I believe that I believe… that I exist.” And if that is the case, and you cannot have that belief, then you mistakenly assigned a probability of 1 to a false statement (since the statement that you believe that string would be false). Which explains why you should not assign a probability of 1 in the first place, since that is never supposed to happen.
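Spelled out roughly (this formalization is mine, not anything stated above): let $S$ be the million-fold statement and $H$ the event that you are actually capable of holding such a belief. If there is any chance $\varepsilon > 0$ that you are not, then

$$\Pr(S) = \Pr(S \mid H)\Pr(H) + \Pr(S \mid \neg H)\Pr(\neg H) \le 1 \cdot (1 - \varepsilon) + 0 \cdot \varepsilon = 1 - \varepsilon < 1,$$

so probability 1 was never an admissible assignment.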
(Also, “I believe that I believe X” cannot be logically deduced from “I believe X” in any case.)
To deduce something you need to use two premises. Nothing follows from “I believe X” without something additional. The other premise would have to be, “In every case when someone believes X, they also believe that they believe X.” But that is not obviously true.
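Written as a schema (the universally quantified premise is the disputed one; in doxastic logic it is known as positive introspection):

$$B(X), \quad \forall Y\,\big[B(Y) \rightarrow B(B(Y))\big] \;\vdash\; B(B(X)),$$

where the conclusion follows by instantiating $Y := X$ and applying modus ponens. Drop the second premise and nothing follows.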
“In the case where I (Dragon God) believe X, I believe that I believe X.”
I am reasonably confident that is true.
Most people would call that begging the question. I will refrain from that particular accusation, since ordinarily people just mean that if you agree with the premises, of course you will agree with the conclusion. But there is something to the point they would be making: if you are sure of that in general, you are already claiming more than that you believe that you believe that you exist. In other words, it can be “logically deduced” only by assuming even more stuff.
Also, in order to have a probability of 1 for the conclusion, you would need a probability of 1 for that claim, not just reasonable confidence.
I don’t think that’s actually true.
Even if it was, I don’t think you can say you have a belief if you haven’t actually deduced it yet. Even taking something simple like math, you might believe theorem A, theorem B, and theorem C, and it might be possible to deduce theorem D from those three, but I don’t think it’s accurate to say “you believe D” until you’ve actually figured out that it logically follows from A, B, and C.
If you’ve never even thought of something, I don’t think you can say that you “believe” it.
Fair.