The probability question is straightforward, and is indeed about a 1000/1001 chance of tropical paradise. If this does not make sense, feel free to ask about it.
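For concreteness, a minimal sketch of the arithmetic, assuming one physical instance and 1000 simulated instances that are subjectively indistinguishable, with equal weight on each copy (that equal-weighting step is exactly what gets disputed below):

```python
# Toy self-location count for the scenario as stated: 1 physical copy
# plus 1000 simulated copies, all with identical experiences.
physical_copies = 1
simulated_copies = 1000
total_copies = physical_copies + simulated_copies

# Weighting every subjectively identical copy equally, the chance that
# "I" am one of the simulated copies (and so about to wake up in the
# tropical paradise) is:
p_simulated = simulated_copies / total_copies
print(p_simulated)  # 1000/1001, roughly 0.999
```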
To me, this seems to neglect the prospect of someone else simulating the exact scene a bunch more times, somewhere out in time and space. To me, once you’ve cut yourself loose of Occam’s Razor/Kolmogorov Complexity and started assigning probabilities as frequencies throughout a space-time continuum in which identical subjective agent-moments occur multiply, you have long since left behind Cox’s Theorem and the use of probability to reason over limited information.
this seems to neglect the prospect of someone else simulating the exact scene a bunch more times, somewhere out in time and space
This is true—and I do think the probability of this is negligible. Additional simulations of our universe wouldn’t change the probabilities—you’d need the simulator to interfere in a very specific way that seems unlikely to me.
once you’ve cut yourself loose of Occam’s Razor/Kolmogorov Complexity and started assigning probabilities as frequencies throughout a space-time continuum in which identical subjective agent-moments occur multiply
Why do those conflict at all? I feel like you may be talking about a nonstandard use of occam’s razor.
long since left behind [...] the use of probability
What probability do you give the simulation hypothesis?
What probability do you give the simulation hypothesis?
Some extremely low prior based on its necessary complexity.
This is true—and I do think the probability of this is negligible.
No, you have no information about that probability. You can assign a complexity prior to it and nothing more.
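To spell out what a complexity prior gives you here, as a rough sketch: the weight on a hypothesis H is on the order of 2^(-K(H)), where K(H) is the length in bits of the shortest program that specifies H. If “everything I experience is a simulation” needs, say, n extra bits beyond the plain physical story to specify the simulator and its setup, it starts out penalized by a factor of roughly 2^(-n); for n = 50 that is already about 10^(-15). The exact figure depends on the reference machine, so this is the shape of the penalty, not a number you actually know.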
Why do those conflict at all? I feel like you may be talking about a nonstandard use of occam’s razor.
They conflict because you have two perspectives, and therefore two different sets of information, and therefore two very different distributions. Assume the scenario happens: the person running the simulation from outside has information about the simulation. They have the evidence necessary to defeat the low prior on “everything So and So experiences is a simulation”. So and So himself… does not have that information. His limited information, from sensory data that exactly matches the real, physical, lawful world rather than the mutable simulated environment, rationally justifies a distribution in which “This is all physically real and I am in fact not going to a tropical paradise in the next minute because I’m not a computer simulation” is the Maximum a Posteriori hypothesis, taking up the vast majority of the probability mass.
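To put toy numbers on “two sets of information, two distributions”, with every figure below invented purely for the sketch:

```python
# Hypothetical numbers purely for illustration, not claims about the
# actual magnitudes involved.
prior_sim = 1e-15            # complexity-penalized prior on "this scene is simulated"
prior_real = 1 - prior_sim

# So and So, inside the scene: his sensory data is exactly what the real,
# lawful world would produce, so it is no more likely under "simulation"
# than under "reality". Likelihood ratio 1, posterior equals prior.
likelihood_ratio_inside = 1.0
posterior_sim_inside = (prior_sim * likelihood_ratio_inside) / (
    prior_sim * likelihood_ratio_inside + prior_real * 1.0
)

# The person running the simulation from outside: their evidence (the
# hardware, the running program) is astronomically more likely if the
# simulation exists, which is what defeats the low prior.
likelihood_ratio_outside = 1e30
posterior_sim_outside = (prior_sim * likelihood_ratio_outside) / (
    prior_sim * likelihood_ratio_outside + prior_real * 1.0
)

print(posterior_sim_inside)   # ~1e-15: "physically real" remains the MAP hypothesis
print(posterior_sim_outside)  # ~1.0: the outsider should believe in the simulation
```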
So, the standard Bayesian analogue of Solomonoff induction is to put a complexity prior over computable predictions about future sensory inputs. If the shortest program outputting your predictions looks like a specification of a physical world, and then an identification of your sensory inputs within that world, and the physical world in your model has both a meatspace copy of you and a simulated copy of you, the only difference in this Solomonoff-analogous prior between being a meat-person and a chip-person is the complexity of identifying your sensory inputs. I think it is unfounded substrate chauvinism to think that your sensory inputs are less complicated to specify than those of an uploaded copy of yourself.
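A minimal sketch of that decomposition, with placeholder bit counts rather than real estimates:

```python
# Description length of a hypothesis, per the decomposition above:
# (bits to specify the physical world) + (bits to pick out your sensory
# stream within that world). All counts are placeholders.
WORLD_BITS = 10_000          # shared by both hypotheses

index_bits = {
    "meat-person": 300,      # bits to locate the biological copy's sensory inputs
    "chip-person": 300,      # bits to locate the uploaded copy's sensory inputs
}

def log2_prior(hypothesis: str) -> float:
    """log2 of the 2^-(description length) weight; kept in log space to avoid underflow."""
    return -float(WORLD_BITS + index_bits[hypothesis])

# WORLD_BITS cancels, so the prior odds between the two hypotheses depend
# only on the index terms: the claim that the sole difference is the
# complexity of identifying your sensory inputs.
log2_odds = log2_prior("meat-person") - log2_prior("chip-person")
print(2.0 ** log2_odds)      # 1.0 when the two index complexities are set equal
```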
If the shortest program outputting your predictions looks like a specification of a physical world, and then an identification of your sensory inputs within that world, and the physical world in your model has both a meatspace copy of you and a simulated copy of you, the only difference in this Solomonoff-analogous prior between being a meat-person and a chip-person is the complexity of identifying your sensory inputs.
Firstly, this isn’t a Solomonoff-analogous prior. It is the Solomonoff prior. Solomonoff Induction is Bayesian.
Secondly, my objection is that in all circumstances, if right-now-me does not possess actual information about uploaded or simulated copies of myself, then the simplest explanation for physically-explicable sensory inputs (i.e., sensory inputs that don’t vary between physical and simulated copies), the explanation with the lowest Kolmogorov complexity, is that I am physical and also the only copy of myself in existence at the present time.
This means that the 1000 simulated copies must arrive at an incorrect conclusion for rational reasons: the scenario you invented deliberately and maliciously strips them of any means to distinguish themselves from the original, physical me. A rational agent cannot be expected to necessarily win in adversarially-constructed situations.
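To spell out the description-length comparison behind the “Secondly” point, again with placeholder numbers:

```python
# The claim: "I am one of the simulated copies" has to specify everything
# that "I am the physical original" specifies, plus a simulator and the
# copying setup, so its description is longer and its complexity prior
# correspondingly smaller. Bit counts are placeholders, not estimates.
PHYSICAL_WORLD_BITS = 10_000
SIMULATOR_OVERHEAD_BITS = 500

description_bits = {
    "physical, only copy": PHYSICAL_WORLD_BITS,
    "one of 1000 simulated copies": PHYSICAL_WORLD_BITS + SIMULATOR_OVERHEAD_BITS,
}

# With a 2^-(description length) prior and no sensory evidence that
# distinguishes the two, the prior odds (in bits, log2) favour the
# shorter hypothesis by the size of the overhead.
log2_odds_physical = (
    description_bits["one of 1000 simulated copies"]
    - description_bits["physical, only copy"]
)
print(log2_odds_physical)    # 500 bits of prior odds toward "physical, only copy"
```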
I think the grandparent’s argument really had more to do with “reason(ing) over limited information” vs frequencies in a possibly infinite space-time continuum. That still seems like a weak objection, given that anthropics looks related to the topic of fixing Solomonoff induction.
It is the Solomonoff prior. Solomonoff Induction is Bayesian.
It’s the basis for a common use. However, this seems pretty clearly wrong or incomplete.