Can you formulate this as a challenge to SIA in particular? You claim that it affects SIA, but your issue is with reference classes, and SIA doesn’t care about your reference class.
Your probe example is confusingly worded. You include time as a factor but say time doesn’t matter. Can you reduce it to the simplest possible version that still yields the paradoxical result you want?
>If you don’t see a pop-up, and you think this somehow allows you to justifiably update in favor of no probes having seen <event x>
I don’t think SIA says you should update in this manner, except very slightly. If I’m understanding your example correctly, all the probes end up tiling their light cones, so the number of sims is equal regardless of what happened. The worlds with fewer probes having seen x become slightly more likely than the prior, but no anthropic reasoning is needed to get that result.
In general, I think of SIA as dictating our prior, while all updates are independent of anthropics. Our posterior is simply the SIA prior conditioned on all facts we know about our own existence. Roughly speaking, SSA represents a prior that we’re equally likely to exist in worlds that are equally likely to exist, while SIA represents a prior that we’re equally likely to be any two observers that are equally likely to exist.
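To make that concrete, here is a minimal sketch (a made-up two-world setup of my own: one world with a single observer, another with two, equally likely to exist) of how the two priors spread credence over observers:

```python
# Two candidate worlds, equally likely to exist a priori.
# World "W1" contains 1 observer; world "W2" contains 2 observers.
worlds = {"W1": 1, "W2": 2}          # world -> number of observers
world_prior = {"W1": 0.5, "W2": 0.5}

# SSA-flavored prior: each world keeps its probability, split evenly
# among the observers inside it.
ssa = {(w, i): world_prior[w] / n for w, n in worlds.items() for i in range(n)}

# SIA-flavored prior: each observer gets weight equal to the probability
# that its world exists, then everything is renormalized, so observer-rich
# worlds end up with more total weight.
unnorm = {(w, i): world_prior[w] for w, n in worlds.items() for i in range(n)}
total = sum(unnorm.values())
sia = {k: v / total for k, v in unnorm.items()}

print(ssa)  # {('W1', 0): 0.5, ('W2', 0): 0.25, ('W2', 1): 0.25}
print(sia)  # each of the three observers gets ~0.333
```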
(Separately, I think a lot of this “existence” talk is misguided and we should be talking about probabilities in an expectations sense only, but that’s not really relevant here.)
>Can you formulate this as a challenge to SIA in particular? You claim that it affects SIA, but your issue is with reference classes, and SIA doesn’t care about your reference class.
The point is that SIA similarly overextends its reach- it claims to make predictions about phenomena that could not yet have had any effect on your brain’s operation, for reasons demonstrated with SSA in the example in the post.
Your probability estimates can only be affected by a pretty narrow range of stuff, in practice. Because SIA does not deliberately draw the line of “all possible observers” around “all possible observers which could so far have had an impact on my probability estimates, given the speed of light and other physical restrictions on the propagation of information”, it unfortunately implies that your probability estimates correspond to things which, via physics, they can’t.
Briefly, “You cannot reason about things which could not yet have had an impact on your brain.”
SSSA/SSA are more common, which is why I focused on them. For the record, I used an example in which SSSA and SSA predict exactly the same things. SIA doesn’t predict the same thing here, but the problem I gestured at is also present in SIA, just with a less laborious argument.
>Your probe example is confusingly worded. You include time as a factor but say time doesn’t matter. Can you reduce it to the simplest possible version that still yields the paradoxical result you want?
Yeah, sorry- I’m still editing this post. I’ll reword it tomorrow. I’m not sure if I’ll remove that specific disclaimer, though.
We could activate the simulated versions of you at any time- whether or not the other members of your reference class are activated at different times doesn’t matter under standard usage of SIA/SSA/SSSA. I’m just including the extra information that the simulations are all spun up at the same time in case you have some weird disagreement with that, and in order to more closely match intuitive notions of identity.
I included that disclaimer because there are questions to be had about time- the probes are presumably in differently warped regions of spacetime, so it’s not so clear what it means to say these events are happening at the same time.
>I don’t think SIA says you should update in this manner, except very slightly. If I’m understanding your example correctly, all the probes end up tiling their light cones, so the number of sims is equal regardless of what happened. The worlds with fewer probes having seen x become slightly more likely than the prior, but no anthropic reasoning is needed to get that result.
Only the probes which see <event x> end up tiling their light cones. The point is to change the relative frequencies of the members of your reference class. Because SSA/SSSA assume that you are randomly selected from your reference class, by shifting the relative frequencies of different future observations within your reference class, SSA/SSSA imply you can gain information about arbitrary non-local phenomena. This problem is present even outside of this admittedly contrived hypothetical; the hypothetical just takes an extra step and turns the problem into an FTL telephone.
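To spell out the mechanism with placeholder numbers (the probe count, the prior chance of seeing <event x>, and the number of simulations a tiling probe runs are all mine, chosen only for illustration), here is roughly the update that SSA-style self-sampling licenses, and which I’m claiming you can’t actually be entitled to:

```python
from math import comb

# Placeholder numbers, purely for illustration:
N = 10       # probes sent out
p = 0.5      # prior chance each probe independently sees <event x>
S = 10**6    # simulations run by a probe that saw <event x> (the tiling);
             # a probe that didn't see <event x> runs exactly one, with no pop-up

# Prior over k = number of probes that saw <event x>
prior_k = {k: comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)}

# SSA-style likelihood: treat yourself as a random member of the reference
# class of all simulated copies, so P(no pop-up | k) is just the fraction
# of copies that are no-pop-up sims.
def p_no_popup(k):
    no_popup_copies = N - k   # one per probe that didn't see <event x>
    popup_copies = k * S      # tiled copies from probes that did
    return no_popup_copies / (no_popup_copies + popup_copies)

# Posterior over k after waking up and seeing no pop-up
unnorm = {k: prior_k[k] * p_no_popup(k) for k in prior_k}
total = sum(unnorm.values())
posterior = {k: v / total for k, v in unnorm.items()}

print(posterior[0])  # ~0.999: nearly all the mass lands on "no probe saw <event x>"
```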
It doesn’t seem that there’s any way to put your hand on the scale of the number of possible observers, therefore (as previously remarked) this example doesn’t apply to SIA. The notion that SIA is overextending its reach, by claiming to make justified claims about things we can show (using physics) you cannot make justified claims about, still applies.
>In general, I think of SIA as dictating our prior, while all updates are independent of anthropics. Our posterior is simply the SIA prior conditioned on all facts we know about our own existence. Roughly speaking, SSA represents a prior that we’re equally likely to exist in worlds that are equally likely to exist, while SIA represents a prior that we’re equally likely to be any two observers that are equally likely to exist.
The problem only gets pushed back- we can also assert that your priors cannot correspond to phenomena which (up until now) have been non-local to you. I’m hesitant to say that you’re not allowed to use this form of reasoning- in practice, using SIA may be quite useful. However, it’s important to be clear that SIA does have this invalid implication.
If you reject both the SIA and SSA priors (in my example, SIA giving 1⁄3 to each of A, B, and C, and SSA giving 1⁄2 to A and 1⁄4 to B and C), then what prior do you give?
Whatever prior you give, you will still end up updating as you learn information. There’s no way around that unless you reject Bayes or you assert a prior that places 0 probability on the clones, which seems sillier than any consequences you’re drawing out here.
>If you reject both the SIA and SSA priors (in my example, SIA giving 1⁄3 to each of A, B, and C, and SSA giving 1⁄2 to A and 1⁄4 to B and C), then what prior do you give?
I reject these assumptions, not their priors. The actual assumptions and the methodology behind them have physically incoherent implications- the priors they assign may still be valid, especially in scenarios where it seems like there are exactly two reasonable priors, and they both choose one of them.
>Whatever prior you give, you will still end up updating as you learn information. There’s no way around that unless you reject Bayes or you assert a prior that places 0 probability on the clones, which seems sillier than any consequences you’re drawing out here.
The point is not that you’re not allowed to have prior probabilities for what you’re going to experience. I specifically placed a mark on the prior probability of what I expected to experience in the “What if...” section.
If you actually did the sleeping beauty experiment in the real world, it’s very clear that “you would be right most often when you woke up” if you said you were in the world with two observers.
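Here’s a quick simulation of that frequency claim (the trial count and the fair coin are arbitrary choices on my part, just to illustrate the counting):

```python
import random

# Repeated Sleeping-Beauty-style runs: a fair coin decides whether the world
# ends up with one awakening or two. Every awakening guesses "I'm in the
# two-observer world"; we count how often that guess is right per awakening.
random.seed(0)
trials = 100_000
correct = 0
awakenings = 0

for _ in range(trials):
    two_observer_world = random.random() < 0.5
    wakeups = 2 if two_observer_world else 1
    awakenings += wakeups
    if two_observer_world:
        correct += wakeups

print(correct / awakenings)  # ~2/3 of awakenings occur in the two-observer world
```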
My formulation of those assumptions, as I’ve said, is entirely a prior claim.
If you agree with those priors and Bayes, you get those assumptions.
You can’t say that you accept the prior, accept Bayes, but reject the assumption without explaining what part of the process you reject. I think you’re just rejecting Bayes, but the unnecessary complexity of your example is complicating the analysis. Just do Sleeping Beauty with the copies in different light cones.
I’m asking for your prior in the specific scenario I gave.
>My formulation of those assumptions, as I’ve said, is entirely a prior claim.
You can’t gain non-local information using any method, regardless of the words or models you want to use to contain that information.
>If you agree with those priors and Bayes, you get those assumptions.
You cannot reason as if you were selected randomly from the set of all possible observers. Doing so allows you to infer information about what the set of all possible observers looks like, despite provably not having access to that information. There are practical implications of this, the consequences of which were shown in the above post with SSA.
>You can’t say that you accept the prior, accept Bayes, but reject the assumption without explaining what part of the process you reject. I think you’re just rejecting Bayes, but the unnecessary complexity of your example is complicating the analysis. Just do Sleeping Beauty with the copies in different light cones.
It’s not a specific case of Sleeping Beauty. Sleeping Beauty has meaningfully distinct characteristics.
This is a real world example that demonstrates the flaws with these methods of reasoning. The complexity is not unnecessary.
>I’m asking for your prior in the specific scenario I gave.
My estimate is 2/3rds for the 2-Observer scenario. Your claim that “priors come before time” makes me want to use different terminology for what we’re talking about here. Your brain is a physical system and is subject to the laws governing other physical systems- whatever you mean by “priors coming before time” isn’t clearly relevant to the physical configuration of the particles in your brain.
The fact that I execute the same Bayesian update with the same prior in this situation does not mean that I “get” SIA- SIA has additional physically incoherent implications.
>Doing so allows you to infer information about what the set of all possible observers looks like
I don’t understand why you’re calling a prior “inference”. Priors come prior to inferences, that’s the point. Anyway, there are arguments for particular universal priors, e.g. the Solomonoff universal prior. This is ultimately grounded in Occam’s razor, and Occam can be justified on grounds of usefulness.
>This is a real world example that demonstrates the flaws with these methods of reasoning. The complexity is not unnecessary.
It clearly is unnecessary—nothing in your examples requires there to be tiling. You should give an example with a single clone being produced, complete with the priors SIA gives as well as your theory, along with posteriors after Bayesian updating.
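As a sketch of the shape I have in mind (the event probability and the pop-up mechanics below are placeholders of mine, not your actual setup): with a single clone, conditioning on “no pop-up” leaves SIA’s credence in <event x> at the prior, while SSA still shifts toward “no <event x>”.

```python
# Placeholder single-clone setup (not the post's exact scenario): with
# probability p the probe sees <event x> and spins up one extra simulation
# that shows a pop-up; either way exactly one no-pop-up simulation runs.
p = 0.5

# Possible observers, keyed by (world, sees_popup).
# SIA: each observer is weighted by the probability that its world exists.
# SSA: each world's probability is first split evenly among its observers.
sia = {("x", False): p, ("x", True): p, ("no_x", False): 1 - p}
ssa = {("x", False): p / 2, ("x", True): p / 2, ("no_x", False): 1 - p}

def p_event_x_given_no_popup(weights):
    # Condition on being a no-pop-up observer, then renormalize.
    no_popup = {k: v for k, v in weights.items() if not k[1]}
    return no_popup[("x", False)] / sum(no_popup.values())

print(p_event_x_given_no_popup(sia))  # 0.5: SIA's posterior equals the prior p
print(p_event_x_given_no_popup(ssa))  # ~0.333: SSA shifts toward "no <event x>"
```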
>SIA has additional physically incoherent implications
I don’t see any such implications. You need to simplify and more fully specify your model and example.
>I don’t understand why you’re calling a prior “inference”. Priors come prior to inferences, that’s the point.
SIA is not isomorphic to “Assign priors based on Kolmogorov Complexity”. If what you mean by SIA is something more along the lines of “Constantly update on all computable hypotheses ranked by Kolmogorov Complexity”, then our definitions have desynced.
Also, remember: you need to select your priors based on inferences in real life. You’re a neural network that developed from scattered particles- your priors need to have actually entered into your brain at some point.
Regardless of whether your probabilities entered your brain under the name of a “prior” or an “update”, the presence of that information still needs to work within our physical models and their conclusions about the ways in which information can propagate.
SIA has you reason as if you were randomly selected from the set of all possible observers. This is what I mean by SIA, and is a distinct idea. If you’re using SIA to gesture to the types of conclusions that you’d draw using Solomonoff Induction, I claim definition mismatch.
>It clearly is unnecessary—nothing in your examples requires there to be tiling. You should give an example with a single clone being produced, complete with the priors SIA gives as well as your theory, along with posteriors after Bayesian updating.
I specifically listed the point of the tiling in the paragraph that mentions tiling:
>for you to agree that the fact you don’t see a pink pop-up appear provides strong justified evidence that none of the probes saw <event x>
The point of the tiling is, as I have said (including in the post), to manipulate the relative frequencies of actually existent observers strongly enough to invalidate SSA/SSSA in detail.
>I don’t see any such implications. You need to simplify and more fully specify your model and example.
There’s phenomena which your brain could not yet have been impacted by, based on the physical ways in which information propagates. If you think you’re randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like, which is problematic.
>I don’t see any such implications. You need to simplify and more fully specify your model and example.
Just to reiterate, my post isn’t particularly about SIA. I showed the problem with SSA/SSSA- the example was set up to do something else.
>If what you mean by SIA is something more along the lines of “Constantly update on all computable hypotheses ranked by Kolmogorov Complexity”, then our definitions have desynced.
No, that’s what I mean by Bayesianism—SIA is literally just one form of interpreting the universal prior. SSA is a different way of interpreting that prior.
>Also, remember: you need to select your priors based on inferences in real life. You’re a neural network that developed from scattered particles- your priors need to have actually entered into your brain at some point.
The bootstrap problem doesn’t mean you apply your priors as an inference. I explained which prior I selected. Yes, if I had never learned about Bayes or Solomonoff or Occam I wouldn’t be using those priors, but that seems irrelevant here.
>SIA has you reason as if you were randomly selected from the set of all possible observers.
Yes, this is literally describing a prior—you have a certain, equal, prior probability of “being” any member of that set (up to weighting and other complications).
>If you think you’re randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like
As I’ve repeatedly stated, this is a prior. The set of possible observers is fully specified by Solomonoff induction. This is how you reason regardless of whether you send off probes or not. It’s still unclear what you think is impermissible in a prior—do you really think one can’t have a prior over what the set of possible observers looks like? If so, some questions about the future will end up unanswerable for you, which seems problematic. If you specify your model, I can construct a scenario that’s paradoxical for you, or Dutch-bookable if you indeed reject Bayes as I think you’re doing.
Once you confirm that my fully specified model captures what you’re looking for, I’ll go through the math and show how one applies SIA in detail, in my terms.
>Only the probes which see <event x> end up tiling their light cones.
The version of the post I responded to said that all probes eventually turn on simulations. Let me know when you have an SIA version, please.
>we can also assert that your priors cannot correspond to phenomena which (up until now) have been non-local to you
The “up until now” part of this is nonsense—priors come before time. Other than that, I see no reason to place such a limitation on priors, and if you formalize this I can probably find a simple counterexample. What does it even mean for a prior to correspond to a phenomenon?
All SIA is doing is asserting that events A, B, and C have equal prior probability. (A is living in universe 1, which has 1 observer; B and C are living in universe 2, which has 2 observers, and being the first and second observer respectively. B and C can be non-local.)
>Briefly, “You cannot reason about things which could not yet have had an impact on your brain.”
If you knew for a fact that something couldn’t have had an impact, this might be valid. But in your scenarios, these could have had an impact, yet didn’t. It’s a perfectly valid update.
You should simplify to having exactly one clone created. In fact, I suspect you can state your “paradox” in terms of Sleeping Beauty—this seems similar to some arguments people give against SIA there, claiming one does not acquire new evidence upon waking. I think this is incorrect—one learns that one has woken in the SB scenario, which on SIA’s priors leads one to update to the thirder position.
>The version of the post I responded to said that all probes eventually turn on simulations.
The probes which run the simulations of you without the pop-up run exactly one simulation each. The simulation is run “on the probe.”
>Let me know when you have an SIA version, please.
I’m not going to write a new post for SIA specifically- I already demonstrated a generalized problem with these assumptions.
>The “up until now” part of this is nonsense—priors come before time. Other than that, I see no reason to place such a limitation on priors, and if you formalize this I can probably find a simple counterexample. What does it even mean for a prior to correspond to a phenomenon?
Your entire brain is a physical system; it must abide by the laws of physics. This very fact limits what your priors can be- there is some stuff that the positions of the particles in your brain could not yet have been affected by (by the very laws of physics).
The fact that you use some set of priors is a physical phenomenon. If human brains acquire information in ways that do not respect locality, you can break all of the rules, acquire infinite power, etc.
“Up until now” refers to the fact that the phenomena have, up until now, been unable to affect your brain.
I wrote a whole post trying to get people to look at the ideas behind this problem, see above. If you don’t see the implication, I’m not going to further elaborate on it, sorry.
>All SIA is doing is asserting that events A, B, and C have equal prior probability. (A is living in universe 1, which has 1 observer; B and C are living in universe 2, which has 2 observers, and being the first and second observer respectively. B and C can be non-local.)
SIA is asserting more than that events A, B, and C have equal prior probability.
Sleeping Beauty and these hypotheticals here are different- these hypotheticals make you observe something that is unreasonably unlikely under one hypothesis but very likely under another, and then show that you can’t update your confidences in these hypotheses in the dramatic way demonstrated in the first hypothetical.
You can’t change the number of possible observers, so you can’t turn SIA into an FTL telephone. SIA still makes the same mistake that allows you to turn SSA/SSSA into FTL telephones, though.
>If you knew for a fact that something couldn’t have had an impact, this might be valid. But in your scenarios, these could have had an impact, yet didn’t. It’s a perfectly valid update.
There really couldn’t have been an impact. The versions of you that wake up and don’t see pop-ups (and their brains) could not have been affected by what’s going on with the other probes- they are outside of one another’s cosmological horizon. You could design similar situations where your brain eventually could be affected by them, but you’re still updating prematurely.
I told you the specific types of updates that you’d be allowed to make. Those are the only ones you can justifiably say correspond to anything- as in, are the result of observations you’ve made. If you don’t see a pop-up: not all of the probes saw <event x>, your probe didn’t see <event x>, you’re a person who didn’t see a pop-up, etc. If you see a pop-up: your assigned probe saw <event x>, and thus at least one probe saw <event x>, and you are a pop-up person, etc.
However, you can’t do anything remotely resembling the update mentioned in the first hypothetical. You’re only learning information about your specific probe’s fate, and what type of copy you ended up being.
>You should simplify to having exactly one clone created. In fact, I suspect you can state your “paradox” in terms of Sleeping Beauty—this seems similar to some arguments people give against SIA there, claiming one does not acquire new evidence upon waking. I think this is incorrect—one learns that one has woken in the SB scenario, which on SIA’s priors leads one to update to the thirder position.
You can’t simplify to having exactly one clone created.
There is a different problem going on here than in the SB scenario. I mostly agree with the 1/3rds position- you’re least inaccurate when your estimate for the 2-Observer scenario is 2/3rds. I don’t agree with the generalized principle behind that position, though. It requires adjustments in order to be clearer about what it is you’re doing, and why you’re doing it.
>The fact that you use some set of priors is a physical phenomenon.
Sure, but irrelevant. My prior is exactly the same in all scenarios—I am chosen randomly from the set of observers according to the Solomonoff universal prior. I condition based on my experiences, updating this prior to a posterior, which is Solomonoff induction. This process reproduces all the predictions of SIA. No part of this process requires information that I can’t physically get access to, except the part that requires actually computing Solomonoff as it’s uncomputable. In practice, we approximate the result of Solomonoff as best we can, just like we can never actually put pure Bayesianism into effect.
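To be clear about what I mean by that last step, here’s a crude toy stand-in for the “approximate Solomonoff” move (the hypotheses and bit-lengths below are invented; real Solomonoff induction ranges over all programs and is uncomputable): weight hypotheses by 2^-(description length), then condition by discarding the ones inconsistent with what you’ve observed.

```python
# Toy complexity-weighted updating, only to illustrate the shape of the
# procedure described above; the hypotheses and bit-lengths are invented.
hypotheses = {
    # name: (description length in bits, what it predicts you observe)
    "simple_world":  (3, "no_popup"),
    "complex_world": (9, "no_popup"),
    "popup_world":   (5, "popup"),
}

# Prior proportional to 2^-(description length), normalized.
prior = {h: 2.0 ** -length for h, (length, _) in hypotheses.items()}
z = sum(prior.values())
prior = {h: w / z for h, w in prior.items()}

# Condition on the observation: drop inconsistent hypotheses, renormalize.
observation = "no_popup"
consistent = {h: prior[h] for h, (_, pred) in hypotheses.items() if pred == observation}
posterior = {h: w / sum(consistent.values()) for h, w in consistent.items()}

print(posterior)  # most of the mass sits on the simplest consistent hypothesis
```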
Just claiming that you’ve disproven some theory with an unnecessarily complex example that’s not targeted towards the theory in question and refusing to elaborate isn’t going to convince many.
You should also stop talking as if your paradoxes prove anything. At best, they present a bullet that various anthropic theories need to bite, and which some people may find counter-intuitive. I don’t find it counter-intuitive, but I might not be understanding the core of your theory yet.
>SIA is asserting more than that events A, B, and C have equal prior probability.
Like what?
I’m going to put together a simplified version of your scenario and model it out carefully with priors and posteriors to explain where you’re going wrong.