If you reject both the SIA and SSA priors (in my example, SIA giving 1⁄3 to each of A, B, and C, and SSA giving 1⁄2 to A and 1⁄4 each to B and C), then what prior do you give?
Whatever prior you give, you will still end up updating as you learn information. There’s no way around that unless you reject Bayes or you assert a prior that places 0 probability on the clones, which seems sillier than any consequences you’re drawing out here.
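For concreteness, a minimal sketch of those two priors as stated, assuming B and C are the two observers who share a world (the only reading on which the SSA numbers sum to 1):

```python
# The two candidate priors over "which observer am I", as given in the example.
sia = {"A": 1/3, "B": 1/3, "C": 1/3}  # SIA: equal weight to every possible observer
ssa = {"A": 1/2, "B": 1/4, "C": 1/4}  # SSA: equal weight per world, split among its observers

# Assuming B and C live in the two-observer world, each prior already implies
# a credence that you are in that world:
print(sia["B"] + sia["C"])  # 2/3
print(ssa["B"] + ssa["C"])  # 1/2
```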
>If you reject both the SIA and SSA priors (in my example, SIA giving 1⁄3 to each of A, B, and C, and SSA giving 1⁄2 to A and 1⁄4 each to B and C), then what prior do you give?
I reject these assumptions, not their priors. The actual assumptions and the methodology behind them have physically incoherent implications; the priors they assign may still be valid, especially in scenarios where there seem to be exactly two reasonable priors and each assumption picks one of them.
>Whatever prior you give, you will still end up updating as you learn information. There’s no way around that unless you reject Bayes or you assert a prior that places 0 probability on the clones, which seems sillier than any consequences you’re drawing out here.
The point is not that you’re not allowed to have prior probabilities for what you’re going to experience. I specifically gave a prior probability for what I expected to experience in the “What if...” section.
If you actually ran the Sleeping Beauty experiment in the real world, it’s very clear that “you would be right most often when you woke up” if you said you were in the world with two observers.
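A minimal simulation of that frequency claim, assuming the standard Sleeping Beauty protocol (fair coin; one awakening on heads, two on tails):

```python
import random

# Assumed protocol: fair coin; heads -> one awakening, tails -> two awakenings.
# Score the guess "I am in the two-awakening world" once per awakening.
trials = 100_000
correct = total = 0
for _ in range(trials):
    tails = random.random() < 0.5
    awakenings = 2 if tails else 1
    total += awakenings
    if tails:
        correct += awakenings  # the guess is right at both awakenings
print(correct / total)  # roughly 2/3 of awakenings
```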
My formulation of those assumptions, as I’ve said, is entirely a prior claim.
If you agree with those priors and Bayes, you get those assumptions.
You can’t say that you accept the prior, accept Bayes, but reject the assumption without explaining what part of the process you reject. I think you’re just rejecting Bayes, but the unnecessary complexity of your example is complicating the analysis. Just do Sleeping Beauty with the copies in different light cones.
I’m asking for your prior in the specific scenario I gave.
>My formulation of those assumptions, as I’ve said, is entirely a prior claim.
You can’t gain non-local information using any method, regardless of the words or models you want to use to contain that information.
>If you agree with those priors and Bayes, you get those assumptions.
You cannot reason as if you were selected randomly from the set of all possible observers. Reasoning that way would let you infer information about what the set of all possible observers looks like, despite provably not having access to that information. There are practical implications of this, the consequences of which were shown in the above post with SSA.
>You can’t say that you accept the prior, accept Bayes, but reject the assumption without explaining what part of the process you reject. I think you’re just rejecting Bayes, but the unnecessary complexity of your example is complicating the analysis. Just do Sleeping Beauty with the copies in different light cones.
It’s not a special case of Sleeping Beauty. Sleeping Beauty has meaningfully distinct characteristics.
This is a real world example that demonstrates the flaws with these methods of reasoning. The complexity is not unnecessary.
>I’m asking for your prior in the specific scenario I gave.
My estimate is 2/3 for the two-observer scenario. Your claim that “priors come before time” makes me want to use different terminology for what we’re talking about here. Your brain is a physical system and is subject to the laws governing other physical systems; whatever you mean by “priors coming before time” isn’t clearly relevant to the physical configuration of the particles in your brain.
The fact that I execute the same Bayesian update with the same prior in this situation does not mean that I “get” SIA; SIA has additional physically incoherent implications.
>Reasoning that way would let you infer information about what the set of all possible observers looks like
I don’t understand why you’re calling a prior “inference”. Priors come prior to inferences, that’s the point. Anyway, there are arguments for particular universal priors, e.g. the Solomonoff universal prior. This is ultimately grounded in Occam’s razor, and Occam can be justified on grounds of usefulness.
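Roughly, that prior weights each computable hypothesis by its description length (glossing over the usual technicalities about prefix machines and semimeasures):

$$P(h) \propto 2^{-K(h)}$$

where $K(h)$ is the length of the shortest program that outputs $h$; simpler hypotheses get more prior mass, which is where Occam comes in.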
>This is a real world example that demonstrates the flaws with these methods of reasoning. The complexity is not unnecessary.
It clearly is unnecessary—nothing in your examples requires there to be tiling. You should give an example with a single clone being produced, complete with the priors SIA gives as well as the priors your theory gives, along with the posteriors after Bayesian updating.
>SIA has additional physically incoherent implications
I don’t see any such implications. You need to simplify and more fully specify your model and example.
>I don’t understand why you’re calling a prior “inference”. Priors come prior to inferences, that’s the point.
SIA is not isomorphic to “Assign priors based on Kolmogorov Complexity”. If what you mean by SIA is something more along the lines of “Constantly update on all computable hypotheses ranked by Kolmogorov Complexity”, then our definitions have desynced.
Also, remember: you need to select your priors based on inferences in real life. You’re a neural network that developed from scattered particles; your priors need to have actually entered your brain at some point.
Regardless of whether your probabilities entered your brain under the name of a “prior” or an “update”, the presence of that information still needs to work within our physical models and their conclusions about the ways in which information can propagate.
SIA has you reason as if you were randomly selected from the set of all possible observers. This is what I mean by SIA, and is a distinct idea. If you’re using SIA to gesture to the types of conclusions that you’d draw using Solomonoff Induction, I claim definition mismatch.
>It clearly is unnecessary—nothing in your examples requires there to be tiling. You should give an example with a single clone being produced, complete with the priors SIA gives as well as the priors your theory gives, along with the posteriors after Bayesian updating.
I specifically listed the point of the tiling in the paragraph that mentions tiling:
>for you to agree that the fact you don’t see a pink pop-up appear provides strong justified evidence that none of the probes saw <event x>
The point of the tiling is, as I have said (including in the post), to manipulate the relative frequencies of actually existent observers strongly enough to invalidate SSA/SSSA in detail.
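To make that mechanism concrete, here is a minimal sketch of the textbook SSA update rule with made-up worlds and observer counts (not the exact setup from the post): a world’s credence is its prior times the fraction of its reference-class observers who share your evidence, so padding a world with extra observers shifts the result all by itself.

```python
# Textbook SSA update (sketch; world names and counts are illustrative only).
def ssa_posterior(world_priors, observers_like_me, observers_total):
    unnormalized = {
        w: world_priors[w] * observers_like_me[w] / observers_total[w]
        for w in world_priors
    }
    z = sum(unnormalized.values())
    return {w: p / z for w, p in unnormalized.items()}

priors = {"event_x": 0.5, "no_event_x": 0.5}
like_me = {"event_x": 1, "no_event_x": 1}  # one observer with my evidence in each world

# No tiling: one reference-class observer per world -> 0.5 / 0.5.
print(ssa_posterior(priors, like_me, {"event_x": 1, "no_event_x": 1}))

# Tile one world with a million extra reference-class observers who do not share
# my evidence: SSA now puts almost all credence on the other world, driven purely
# by the observer counts.
print(ssa_posterior(priors, like_me, {"event_x": 10**6, "no_event_x": 1}))
```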
>I don’t see any such implications. You need to simplify and more fully specify your model and example.
There are phenomena which your brain could not yet have been impacted by, based on the physical ways in which information propagates. If you think you’re randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like, which is problematic.
>I don’t see any such implications. You need to simplify and more fully specify your model and example.
Just to reiterate, my post isn’t particularly about SIA. I showed the problem with SSA/SSSA; the example was specified to do something else.
>If what you mean by SIA is something more along the lines of “Constantly update on all computable hypotheses ranked by Kolmogorov Complexity”, then our definitions have desynced.
No, that’s what I mean by Bayesianism—SIA is literally just one form of interpreting the universal prior. SSA is a different way of interpreting that prior.
>Also, remember: you need to select your priors based on inferences in real life. You’re a neural network that developed from scattered particles; your priors need to have actually entered your brain at some point.
The bootstrap problem doesn’t mean you apply your priors as an inference. I explained which prior I selected. Yes, if I had never learned about Bayes or Solomonoff or Occam I wouldn’t be using those priors, but that seems irrelevant here.
>SIA has you reason as if you were randomly selected from the set of all possible observers.
Yes, this is literally describing a prior—you have a certain, equal, prior probability of “being” any member of that set (up to weighting and other complications).
>If you think you’re randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like
As I’ve repeatedly stated, this is a prior. The set of possible observers is fully specified by Solomonoff induction. This is how you reason regardless of whether you send off probes or not. It’s still unclear what you think is impermissible in a prior—do you really think one can’t have a prior over what the set of possible observers looks like? If so, some questions about the future will end up unanswerable, which seems problematic. If you specify your model, I can construct a scenario that’s paradoxical for you, or Dutch-bookable if you indeed reject Bayes as I think you’re doing.
Once you confirm that my fully specified model captures what you’re looking for, I’ll go through the math and show how one applies SIA in detail, in my terms.
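For what it’s worth, a minimal sketch of the simplest (static) version of the Dutch-book threat above; the diachronic version aimed specifically at non-Bayesian updating is more involved, but the flavor is the same: credences that violate the probability calculus let a bookie price bets so you lose in every outcome.

```python
# Static Dutch book sketch (hypothetical prices, not anyone's actual credences):
# the agent treats p as a fair price for a $1 bet on H and q as a fair price for
# a $1 bet on not-H. If p + q != 1, the bookie trades both bets against the agent
# and the agent's net payoff is negative however H turns out.
def agent_net_payoffs(p, q):
    if p + q > 1:   # bookie sells both bets to the agent at prices p and q
        return [1 - (p + q), 1 - (p + q)]  # (H true, H false): exactly one bet pays $1
    if p + q < 1:   # bookie buys both bets from the agent at prices p and q
        return [(p + q) - 1, (p + q) - 1]
    return [0, 0]   # coherent prices: no guaranteed loss from this book

print(agent_net_payoffs(0.7, 0.5))  # incoherent: down 0.2 whichever way H goes
print(agent_net_payoffs(0.7, 0.3))  # coherent: no sure loss
```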