The version of the post I responded to said that all probes eventually turn on simulations.
The probes which run simulations of you without the pop-up each run exactly one such simulation. The simulation is run “on the probe.”
Let me know when you have an SIA version, please.
I’m not going to write a new post for SIA specifically- I already demonstrated a generalized problem with these assumptions.
The “up until now” part of this is nonsense—priors come before time. Other than that, I see no reason to place such a limitation on priors, and if you formalize this I can probably find a simple counterexample. What does it even mean for a prior to correspond to a phenomenon?
Your entire brain is a physical system; it must abide by the laws of physics. You are limited in what your priors can be by this very fact- there is some stuff that could not yet have affected the positions of the particles in your brain (by the very laws of physics).
The fact that you use some set of priors is a physical phenomenon. If human brains could acquire information in ways that do not respect locality, you could break all of the rules, acquire infinite power, etc.
“Up until now” refers to the fact that the phenomena have, up until now, been unable to affect your brain.
I wrote a whole post trying to get people to look at the ideas behind this problem, see above. If you don’t see the implication, I’m not going to further elaborate on it, sorry.
All SIA is doing is asserting that events A, B, and C have equal prior probability. (A is living in universe 1, which has 1 observer; B and C are living in universe 2, which has 2 observers, and being the first and second observer respectively. B and C can be non-local.)
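As a minimal sketch of that claim (illustrative only, not part of the original scenario), equal prior probability over A, B, and C already fixes the prior probability of being in the 2-observer universe:

```python
# Sketch of the equal-prior-probability claim above (illustrative only).
# A = being the sole observer in universe 1 (1 observer total)
# B = being the first observer in universe 2 (2 observers total)
# C = being the second observer in universe 2

prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

# Before conditioning on anything further, the prior probability of being
# in universe 2 is P(B) + P(C) = 2/3.
print(prior["B"] + prior["C"])
```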
SIA is asserting more than events A, B, and C are equal prior probability.
Sleeping Beauty and the hypotheticals here are different- these hypotheticals make you observe something that is unreasonably unlikely in one hypothesis but very likely in another, and then show that you can’t update your confidences in these hypotheses in the dramatic way demonstrated in the first hypothetical.
You can’t change the number of possible observers, so you can’t turn SIA into an FTL telephone. SIA still makes the same mistake that allows you to turn SSA/SSSA into FTL telephones, though.
If you knew for a fact that something couldn’t have had an impact, this might be valid. But in your scenarios, these could have had an impact, yet didn’t. It’s a perfectly valid update.
There really couldn’t have been an impact. The versions of you that wake up and don’t see pop-ups (and their brains) could not have been affected by what’s going on with the other probes- they are outside of one another’s cosmological horizon. You could design similar situations where your brain eventually could be affected by them, but you’re still updating prematurely.
I told you the specific types of updates that you’d be allowed to make. Those are the only ones you can justifiably say correspond to anything- as in, result from any observation you’ve made. If you don’t see a pop-up: not all of the probes saw <event x>, your probe didn’t see <event x>, you’re a person who didn’t see a pop-up, etc. If you see a pop-up: your assigned probe saw <event x>, and thus at least one probe saw <event x>, and you are a pop-up person, etc.
However, you can’t do anything remotely resembling the update mentioned in the first hypothetical. You’re only learning information about your specific probe’s fate, and what type of copy you ended up being.
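To illustrate why the distinction matters, here is a hedged sketch (the hypotheses H1 and H2, the probe count, and all numbers are invented; this is not the original probe scenario): the posterior you end up with depends heavily on whether you condition on the local fact about your own probe or on a global fact about the whole fleet, which is why being restricted to the local propositions listed above rules out the dramatic update.

```python
# Invented numbers; a toy contrast between conditioning on a local fact
# and conditioning on a global fact about all probes.
N = 1000                              # hypothetical number of probes
prior = {"H1": 0.5, "H2": 0.5}        # two made-up hypotheses about the world
p_x = {"H1": 0.01, "H2": 0.5}         # per-probe chance of seeing <event x> under each

def posterior(likelihood):
    z = sum(prior[h] * likelihood[h] for h in prior)
    return {h: prior[h] * likelihood[h] / z for h in prior}

# Conditioning on the local fact "my probe saw <event x>":
local = posterior({h: p_x[h] for h in prior})                    # ~{'H1': 0.02, 'H2': 0.98}

# Conditioning on the global fact "at least one of the N probes saw <event x>":
global_ = posterior({h: 1 - (1 - p_x[h]) ** N for h in prior})   # ~{'H1': 0.5, 'H2': 0.5}

print(local, global_)
```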
You should simplify to having exactly one clone created. In fact, I suspect you can state your “paradox” in terms of Sleeping Beauty—this seems similar to some arguments people give against SIA there, claiming one does not acquire new evidence upon waking. I think this is incorrect—one learns that one has woken in the SB scenario, which on SIA’s priors leads one to update to the thirder position.
You can’t simplify to having exactly one clone created.
There is a different problem going on here than in the SB scenario. I mostly agree with the thirder position- you’re least inaccurate when your estimate for the 2-Observer scenario is 2/3. I don’t agree with the generalized principle behind that position, though. It requires adjustments in order to be clearer about what it is you’re doing, and why you’re doing it.
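One way to cash out “least inaccurate” here (my framing, assuming the standard fair-coin Sleeping Beauty setup and that accuracy is scored per awakening with a Brier penalty): a fixed credence of 2/3 in the 2-Observer scenario minimizes the expected per-awakening score.

```python
# Score a fixed credence p in "I am in the 2-observer scenario" with a Brier
# penalty at every awakening, averaged over many runs of a fair-coin
# Sleeping-Beauty-like experiment. The minimum lands near p = 2/3.
import random

def expected_brier(p, trials=100_000):
    total, awakenings = 0.0, 0
    for _ in range(trials):
        two_observer = random.random() < 0.5   # fair coin picks the scenario
        wakings = 2 if two_observer else 1     # 2 awakenings vs 1
        truth = 1.0 if two_observer else 0.0
        total += wakings * (p - truth) ** 2    # penalty charged per awakening
        awakenings += wakings
    return total / awakenings

for p in (0.5, 2 / 3, 0.75):
    print(p, round(expected_brier(p), 4))      # lowest expected penalty at p = 2/3
```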
>The fact that you use some set of priors is a physical phenomenon.
Sure, but irrelevant. My prior is exactly the same in all scenarios—I am chosen randomly from the set of observers according to the Solomonoff universal prior. I condition based on my experiences, updating this prior to a posterior, which is Solomonoff induction. This process reproduces all the predictions of SIA. No part of this process requires information that I can’t physically get access to, except the part that requires actually computing Solomonoff as it’s uncomputable. In practice, we approximate the result of Solomonoff as best we can, just like we can never actually put pure Bayesianism into effect.
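Since the full process is uncomputable, any concrete version is an approximation. Here is a heavily simplified sketch of the conditioning step only (a two-element hypothesis set stands in for all programs, the names, description lengths, and likelihoods are invented, and the observer-selection step is glossed over entirely):

```python
# Toy stand-in for the uncomputable process described above (my sketch):
# each hypothesis gets prior weight 2**(-description_length), and we
# condition on how well it predicts my observations.

hypotheses = {
    # name: (description_length_in_bits, probability_it_assigns_to_my_observations)
    "simple_world": (10, 0.2),
    "complex_world": (25, 0.9),
}

def posterior(hs):
    weights = {name: 2 ** -k * lik for name, (k, lik) in hs.items()}
    z = sum(weights.values())
    return {name: w / z for name, w in weights.items()}

print(posterior(hypotheses))
# The simpler hypothesis dominates despite its lower likelihood here, because
# the 2**-length prior penalizes the extra 15 bits by a factor of 2**15.
```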
Just claiming that you’ve disproven some theory with an unnecessarily complex example that’s not targeted towards the theory in question and refusing to elaborate isn’t going to convince many.
You should also stop talking as if your paradoxes prove anything. At best, they present a bullet that various anthropic theories need to bite, and which some people may find counter-intuitive. I don’t find it counter-intuitive, but I might not be understanding the core of your theory yet.
>SIA is asserting more than events A, B, and C are equal prior probability.
Like what?
I’m going to put together a simplified version of your scenario and model it out carefully with priors and posteriors to explain where you’re going wrong.