You don’t need the idea of “souls” to justify anthropics: you can just sample instantiated computations in the universe, run is_observer on them, and get the answer “yes” with some probability, which is pretty much well-defined.
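A minimal sketch of that sampling picture, with a toy "universe" and a stand-in is_observer predicate (both the universe contents and the predicate are illustrative assumptions, not anything specified in the thread):

```python
import random

# Toy "universe": a list of instantiated computations, some of which
# happen to implement observers. is_observer is a stand-in for whatever
# criterion one actually cares about.
universe = ["rock", "star", "observer", "thermostat", "observer", "gas"]

def is_observer(computation):
    return computation == "observer"

def p_observer(universe, trials=100_000, seed=0):
    """Estimate P(is_observer) by uniformly sampling computations."""
    rng = random.Random(seed)
    hits = sum(is_observer(rng.choice(universe)) for _ in range(trials))
    return hits / trials

print(p_observer(universe))  # close to 2/6, the fraction of observers
```

The point of the sketch is just that "probability of being an observer" is well-defined once a sampling procedure over computations is fixed.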
I don’t see how what you’ve just written justifies the assumption “I could have been born in distant past or in far future” which is the basis for the Doomsday Inference.
In a world where “I” am just a result of material causes and effects, I couldn’t have existed before the causes of my existence existed, and they couldn’t have existed before their causes existed, and so on. Therefore the idea that I could’ve existed at some other time doesn’t make sense.
On the other hand, if the body is a result of specific causes and effects, but “I” am not inseparable from this body, because “I” am a specific soul that could’ve been instantiated in any body, then the idea that I could’ve been born at some other moment makes sense.
In a world with material causes and effects, when you toss a coin, you are not tossing the platonic ideal of randomness. If you observe a coin on the table heads up and know how you are going to pick it up and how you are going to toss it, you basically have all the information necessary to predict which side it will land on. If you are capable of avoiding the frequentist nonsense of “well, the probability of the coin landing heads is either 1 or 0 and I can’t do anything about this” by saying “let’s pretend that I don’t have all this information and say that the coin lands heads with probability 0.5”, you are also capable of saying “let’s pretend that I have zero information about myself except my birth rank” and making the proper update on this.
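As an illustration of the kind of update the “birth rank only” move produces, here is the standard Doomsday-style calculation with two made-up hypotheses about the total number of humans ever born (the population figures and the 50/50 prior are arbitrary assumptions for the sketch):

```python
# Two hypotheses about the total number of humans who will ever live,
# with equal priors. Conditional on each, your birth rank is assumed
# uniform over 1..N (the self-sampling move the thread is debating).
hypotheses = {"doom_soon": 200e9, "doom_late": 200e12}
prior = {h: 0.5 for h in hypotheses}

birth_rank = 100e9  # roughly where a present-day human sits

# Likelihood of observing this rank: P(rank | N) = 1/N if rank <= N, else 0.
likelihood = {h: (1 / n if birth_rank <= n else 0.0)
              for h, n in hypotheses.items()}

evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
print(posterior)  # "doom_soon" comes out ~1000x more likely than "doom_late"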
I agree that if you update on the full embedded information you are going to get interesting anthropic results, but that is a different problem.
“If you observe a coin on the table heads up and know how you are going to pick it up and how you are going to toss it, you basically have all the information necessary to predict which side it will land on.”
Sure. If you knew all the relevant information for the coin toss, you could predict the outcome. But as you do not know it, you can treat the situation as an iteration of a probability experiment with two possible outcomes.
“let’s pretend that I don’t have all this information and say that coin lands heads with probability 0.5”
You do not pretend not to have all this information. You actually do not know it! When you reason based on the information available, you get a correct application of probability theory: the type of reasoning that systematically produces correct map-territory relations. When you pretend not to know something that you actually know, you systematically get wrong results.
“let’s pretend that I have zero information about myself except my birth rank”
And this is exactly what you propose here: to reason based on less information than we actually have. We can do it, of course, but then we shouldn’t be surprised that the results are crazy and uncorrelated with the reality we wanted to reason about in the first place.
You have all the relevant information, though. I’m pretty sure AIXI could predict the coin toss if it had access to your visual field and proprioception data. You can’t compute the outcome from this, but probability theory shouldn’t change just because you can’t properly compute the update.
“When you pretend not to know something that you actually know, you systematically get wrong results.”
Eh, no? Usually I can pretty sensibly predict what I would think if I didn’t have some piece of information.
“You have all the relevant information, though. I’m pretty sure AIXI could predict the coin toss if it had access to your visual field and proprioception data.”
Then AIXI has the relevant information, while I do not.
“You can’t compute the outcome from this, but probability theory shouldn’t change just because you can’t properly compute the update.”
A probabilistic model describes the knowledge state of an observer and naturally changes when that knowledge state changes. My ability or inability to extract some information obviously affects which model is appropriate for the problem.
Suppose a coin is tossed, the outcome is written in Japanese on a piece of paper, and this piece of paper is shown to you. Whether or not your credence in the state of the coin changes from the equiprobable prior depends on whether you know Japanese.
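The Japanese-note example can be phrased directly as a credence update that depends on the observer’s ability to decode the message (a toy model; the two-word translation table is an assumption for the sketch):

```python
# Toy model: the coin outcome is reported as a Japanese word.
# 表 (omote) = heads, 裏 (ura) = tails.
TRANSLATION = {"表": "heads", "裏": "tails"}

def credence_heads(message, knows_japanese):
    """Posterior credence that the coin landed heads, given the note."""
    if knows_japanese:
        return 1.0 if TRANSLATION[message] == "heads" else 0.0
    return 0.5  # the note carries no information for this observer

print(credence_heads("表", knows_japanese=True))   # 1.0
print(credence_heads("表", knows_japanese=False))  # 0.5
```

Same piece of paper, two different appropriate models, because the two observers are in different knowledge states.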
“Usually I can pretty sensibly predict what I would think if I didn’t have some piece of information.”
Of course you can. But this way of thinking would be sub-optimal in a situation where you actually have extra information.
I suppose you are expressing skepticism that “being randomly sampled” is a meaningful statement. I’m planning an in-depth dive into the foundations of probability theory in order to clear up this kind of confusion and arrive at a proper definition of what “randomness” even means.
For now, consider the two paper-picking examples from the post. Even if you don’t exactly understand what randomness is, it should be clear that it works differently in these examples. In one case you can get any piece of paper from all the papers in the bag. In the other you may not get a piece of paper at all: empty spaces are added as possible outcomes of the experiment.
Likewise, consider the third version of the paper-picking experiment, where the papers are not picked blindly and you always get the paper with number 6, regardless of the bag. Clearly, this situation is quite different from the previous two. In this experiment you have only one possible outcome, where previously there were multiple. So we say that in this case there is no randomness.
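The three versions differ precisely in their outcome spaces, which a short sketch makes concrete (the bag contents and the number of empty slots are assumptions, since the original post isn’t quoted here):

```python
import random

rng = random.Random(0)
papers = [1, 2, 3, 4, 5, 6]  # assumed bag contents

# Version 1: blind draw from the bag; every paper is a possible outcome.
def draw_v1():
    return rng.choice(papers)

# Version 2: empty slots are added, so "no paper" (None) is itself an outcome.
def draw_v2(n_empty=4):
    return rng.choice(papers + [None] * n_empty)

# Version 3: the draw is not blind; you always get paper 6.
def draw_v3():
    return 6  # a single possible outcome: no randomness left

outcomes_v1 = {draw_v1() for _ in range(10_000)}
outcomes_v2 = {draw_v2() for _ in range(10_000)}
outcomes_v3 = {draw_v3() for _ in range(10_000)}
print(len(outcomes_v1), len(outcomes_v2), len(outcomes_v3))
```

Repeating each experiment shows six possible outcomes in the first, seven in the second (the papers plus the empty slot), and just one in the third.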
In probability theory, the phrase “randomly sampled” can mean pretty much anything, so by itself it’s not really an argument.
As the saying goes, “a constant is a random variable”.
And as another saying goes, “A rose by any other name would smell as sweet”.