>This allows you to infer information about what the set of all possible observers looks like
I don’t understand why you’re calling a prior “inference”. Priors come prior to inferences, that’s the point. Anyway, there are arguments for particular universal priors, e.g. the Solomonoff universal prior. This is ultimately grounded in Occam’s razor, and Occam can be justified on grounds of usefulness.
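To make the Occam grounding concrete, here is a toy sketch (my construction, not a claim about the exact formalism): true Kolmogorov complexity is uncomputable, so description length in bits stands in for it, and hypotheses get prior mass `2^-length`, normalized.

```python
# Toy sketch of a complexity-weighted prior in the spirit of the
# Solomonoff universal prior. Description length in bits is a stand-in
# for (uncomputable) Kolmogorov complexity.

def complexity_prior(hypotheses):
    """Weight each hypothesis by 2^-(description length), normalized."""
    weights = {h: 2.0 ** -length for h, length in hypotheses.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Hypothetical hypotheses with assumed description lengths in bits.
priors = complexity_prior({"simple": 3, "medium": 5, "complex": 10})
print(priors)  # shorter descriptions get exponentially more prior mass
```

The normalization is what makes this a proper prior; the exponential penalty on length is the Occam part.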
>This is a real world example that demonstrates the flaws with these methods of reasoning. The complexity is not unnecessary.
It clearly is unnecessary—nothing in your examples requires there to be tiling, you should give an example with a single clone being produced, complete with the priors SIA gives as well as your theory, along with posteriors after Bayesian updating.
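A minimal version of the requested single-clone example might look like this (my toy numbers, not either theory's official formulation): a fair coin is flipped; on heads one observer exists, on tails a clone is produced so two observers exist.

```python
# Sketch of a single-clone example: fair coin, heads -> one observer,
# tails -> a clone is produced, so two observers exist.

def sia_posterior(world_priors, observer_counts):
    """SIA: scale each world's prior by its observer count, normalize."""
    weighted = {w: world_priors[w] * observer_counts[w] for w in world_priors}
    total = sum(weighted.values())
    return {w: v / total for w, v in weighted.items()}

world_priors = {"heads": 0.5, "tails": 0.5}
observer_counts = {"heads": 1, "tails": 2}

# SIA shifts credence toward the clone world, 2:1.
print(sia_posterior(world_priors, observer_counts))
# SSA, by contrast, keeps the 1/2 : 1/2 prior here, since you are
# guaranteed to exist as some observer in either world.
```

No tiling is needed to exhibit the divergence between the two rules; the single clone already does it.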
>SIA has additional physically incoherent implications
I don’t see any such implications. You need to simplify and more fully specify your model and example.
>I don’t understand why you’re calling a prior “inference”. Priors come prior to inferences, that’s the point.
SIA is not isomorphic to “Assign priors based on Kolmogorov Complexity”. If what you mean by SIA is something more along the lines of “Constantly update on all computable hypotheses ranked by Kolmogorov Complexity”, then our definitions have desynced.
Also, remember: you need to select your priors based on inferences in real life. You’re a neural network that developed from scattered particles; your priors need to have actually entered into your brain at some point.
Regardless of whether your probabilities entered your brain under the name of a “prior” or an “update”, the presence of that information still needs to be consistent with our physical models and their conclusions about the ways in which information can propagate.
SIA has you reason as if you were randomly selected from the set of all possible observers. This is what I mean by SIA, and is a distinct idea. If you’re using SIA to gesture to the types of conclusions that you’d draw using Solomonoff Induction, I claim definition mismatch.
>It clearly is unnecessary—nothing in your examples requires there to be tiling, you should give an example with a single clone being produced, complete with the priors SIA gives as well as your theory, along with posteriors after Bayesian updating.
I specifically listed the point of the tiling in the paragraph that mentions tiling:
>for you to agree that the fact you don’t see a pink pop-up appear provides strong justified evidence that none of the probes saw <event x>
The point of the tiling is, as I have said (including in the post), to manipulate the relative frequencies of actually existing observers strongly enough to invalidate SSA/SSSA in detail.
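The frequency-manipulation point can be sketched numerically (my toy numbers, not the post's exact setup): under SSA you reason as if randomly sampled from the actually existing observers in your reference class, so tiling the world with copies of one observer type lets that type dominate the frequencies.

```python
# Toy sketch: tiling drives SSA's reference-class frequencies.

def ssa_prob_of_being(observer_counts, kind):
    """Chance, under SSA, that a randomly sampled existing observer is `kind`."""
    return observer_counts[kind] / sum(observer_counts.values())

before = {"original": 1, "probe_copy": 1}
after_tiling = {"original": 1, "probe_copy": 10**6}  # tile many copies

print(ssa_prob_of_being(before, "original"))        # 0.5
print(ssa_prob_of_being(after_tiling, "original"))  # ~1e-6
```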
>I don’t see any such implications. You need to simplify and more fully specify your model and example.
There are phenomena which your brain could not yet have been impacted by, based on the physical ways in which information propagates. If you think you’re randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like, which is problematic.
>I don’t see any such implications. You need to simplify and more fully specify your model and example.
Just to reiterate, my post isn’t particularly about SIA. I showed the problem with SSA/SSSA; the example was specified for doing something else.
>If what you mean by SIA is something more along the lines of “Constantly update on all computable hypotheses ranked by Kolmogorov Complexity”, then our definitions have desynced.
No, that’s what I mean by Bayesianism—SIA is literally just one form of interpreting the universal prior. SSA is a different way of interpreting that prior.
>Also, remember: you need to select your priors based on inferences in real life. You’re a neural network that developed from scatted particles- your priors need to have actually entered into your brain at some point.
The bootstrap problem doesn’t mean you apply your priors as an inference. I explained which prior I selected. Yes, if I had never learned about Bayes or Solomonoff or Occam I wouldn’t be using those priors, but that seems irrelevant here.
>SIA has you reason as if you were randomly selected from the set of all possible observers.
Yes, this is literally describing a prior—you have a certain, equal prior probability of “being” any member of that set (up to weighting and other complications).
>If you think you’re randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like
As I’ve repeatedly stated, this is a prior. The set of possible observers is fully specified by Solomonoff induction. This is how you reason regardless of whether you send off probes or not. It’s still unclear what you think is impermissible in a prior—do you really think one can’t have a prior over what the set of possible observers looks like? If so, some questions about the future will end up unanswerable for you, which seems problematic. If you specify your model, I can construct a scenario that’s paradoxical for you, or Dutch-bookable, if you indeed reject Bayes as I think you’re doing.
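The Dutch-book threat is generic (this sketch is not tailored to any particular anthropic theory): if your credences in an exhaustive, mutually exclusive pair of outcomes don't sum to 1, a bookie pricing bets at your stated credences can guarantee you a loss.

```python
# Generic Dutch-book illustration against incoherent credences.

def guaranteed_net(credence_a, credence_not_a, stake=1.0):
    """You buy a bet on A and a bet on not-A, each costing
    credence * stake and paying `stake` if it wins. Exactly one of
    them wins, so your net outcome is certain; negative = sure loss."""
    cost = (credence_a + credence_not_a) * stake
    return stake - cost

print(guaranteed_net(0.5, 0.5))  # coherent credences: net zero
print(guaranteed_net(0.6, 0.6))  # incoherent: sure loss of ~0.2
```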
Once you confirm that my fully specified model captures what you’re looking for, I’ll go through the math and show how one applies SIA in detail, in my terms.