I don’t see how what you’ve just written justifies the assumption “I could have been born in the distant past or in the far future”, which is the basis for the Doomsday Inference.
In a world where “I” am just a result of material causes and effects, I couldn’t have existed before the causes of my existence existed, and they couldn’t have existed before their causes existed, and so on. Therefore the idea that I could’ve existed at some other time doesn’t make sense.
On the other hand, if the body is a result of specific causes and effects, but “I” am not inseparable from this body, if “I” am a specific soul that could’ve been instantiated in any body, then the idea that I could’ve been born at some other moment makes sense.
In a world with material causes and effects, when you toss a coin, you are not tossing the platonic ideal of randomness. If you observe a coin on the table heads up and know how you are going to pick it up and how you are going to toss it, you basically have all the information necessary to predict which side it will land on. If you are capable of avoiding the frequentist nonsense of “well, the probability of the coin landing heads is either 1 or 0 and I can’t do anything about this” by saying “let’s pretend that I don’t have all this information and say that the coin lands heads with probability 0.5”, you are also capable of saying “let’s pretend that I have zero information about myself except my birth rank” and making a proper update on that.
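For concreteness, here is a minimal sketch of that last move, assuming a toy two-hypothesis prior and made-up numbers (none of these figures are anyone’s actual estimates): treat your birth rank as a uniform draw from everyone who will ever live and update on it.

```python
# Toy Doomsday-style update: illustrative numbers only.
birth_rank = 100e9  # "I am roughly the 100-billionth human" (assumed figure)

# Two hypotheses about how many humans will ever live.
totals = {"doom_soon": 200e9, "doom_late": 200e12}
prior = {"doom_soon": 0.5, "doom_late": 0.5}

# Self-sampling likelihood: my rank is a uniform draw from 1..N, so P(rank | N) = 1/N.
likelihood = {h: (1 / n if birth_rank <= n else 0.0) for h, n in totals.items()}

unnormalized = {h: prior[h] * likelihood[h] for h in totals}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(posterior)  # "doom_soon" gets boosted over "doom_late" by a factor of ~1000
```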
I agree that if you update on full embedded information you are going to get interesting anthropic results, but that’s a different problem.
If you observe a coin on the table heads up and know how you are going to pick it up and how you are going to toss it, you basically have all the information necessary to predict which side it will land on.
Sure. If you knew all the relevant information for the coin toss, you could predict the outcome. But since you do not know it, you can treat the situation as an iteration of a probability experiment with two possible outcomes.
“let’s pretend that I don’t have all this information and say that the coin lands heads with probability 0.5”
You do not pretend not to have all this information. You actually do not know it! When you reason based on the information available, you get a correct application of probability theory, the kind of reasoning that systematically produces correct map-territory relations. When you pretend that you do not know something that you actually know, you systematically get wrong results.
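A toy illustration of that last point (my own construction, with an assumed bias, nothing from the original setup): an observer who actually knows a coin’s bias but pretends not to is systematically worse calibrated than one who uses what they know.

```python
import random

random.seed(0)
true_bias = 0.8           # assume the observer genuinely knows this bias
flips = [random.random() < true_bias for _ in range(10_000)]

def brier(stated_prob, outcomes):
    # Mean squared error between the stated probability and the 0/1 outcomes.
    return sum((stated_prob - (1.0 if o else 0.0)) ** 2 for o in outcomes) / len(outcomes)

print("using what you know:  ", brier(true_bias, flips))  # ~0.16
print("pretending ignorance: ", brier(0.5, flips))         # ~0.25, strictly worse
```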
“let’s pretend that I have zero information about myself except my birth rank”
And this is exactly what you propose here: to reason based on less information than we actually have. We can do it, of course, but then we shouldn’t be surprised that the results are crazy and not correlated with the reality we wanted to reason about in the first place.
You have all the relevant information, though. I’m pretty sure AIXI can predict the coin toss if it has access to your visual field and proprioception data. You can’t compute the outcome from this, but probability theory shouldn’t change just because you can’t properly compute the update.
When you pretend that you do not know something that you actually know, you systematically get wrong results.
Eh, no? Usually I can pretty much sensibly predict what I would think if I didn’t have some piece of information.
You have all the relevant information, though. I’m pretty sure AIXI can predict the coin toss if it has access to your visual field and proprioception data.
Then AIXI has the relevant information, while I do not.
You can’t compute the outcome from this, but probability theory shouldn’t change just because you can’t properly compute the update.
A probabilistic model describes the knowledge state of an observer and naturally changes when that knowledge state changes. My ability or inability to extract some information obviously affects which model is appropriate for the problem.
Suppose a coin is tossed, the outcome is written in Japanese on a piece of paper, and the paper is shown to you. Whether your credence in the state of the coin changes from the equiprobable prior depends on whether you know Japanese.
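The same point as a sketch in code (the kanji-to-outcome mapping is just my illustration): the identical piece of paper shifts one observer’s credence and leaves the other’s at the prior.

```python
def credence_in_heads(note: str, knows_japanese: bool) -> float:
    translations = {"表": "heads", "裏": "tails"}  # illustrative encoding
    if not knows_japanese:
        # To this observer the note carries no decodable information,
        # so the credence stays at the equiprobable prior.
        return 0.5
    return 1.0 if translations[note] == "heads" else 0.0

print(credence_in_heads("表", knows_japanese=False))  # 0.5
print(credence_in_heads("表", knows_japanese=True))   # 1.0
```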
Usually I can pretty much sensibly predict what I would think if I didn’t have some piece of information.
Of course you can. But this way of thinking would be sub-optimal in a situation where you actually have extra information.