So, the standard Bayesian analogue of Solomonoff induction is to put a complexity prior over computable predictions about future sensory inputs. If the shortest program outputting your predictions looks like a specification of a physical world, and then an identification of your sensory inputs within that world, and the physical world in your model has both a meatspace copy of you and a simulated copy of you, the only difference in this Solomonoff-analogous prior between being a meat-person and a chip-person is the complexity of identifying your sensory inputs. I think it is unfounded substrate chauvinism to think that your sensory inputs are less complicated to specify than those of an uploaded copy of yourself.
Firstly, this isn’t a “Solomonoff-analogous” prior; it is the Solomonoff prior. Solomonoff induction already is Bayesian inference, carried out with the universal complexity prior, so there is no separate Bayesian analogue to construct.
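To spell out what I mean (a rough statement, glossing over the usual technicalities about prefix machines and semimeasures): the Solomonoff prior weights each program p for a fixed universal machine U by 2^{-|p|}, and prediction is ordinary Bayesian conditioning under that weighting:

$$ M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}, \qquad M(y \mid x) \;=\; \frac{M(xy)}{M(x)}. $$

There is nothing to “analogize”: the complexity prior over computable predictions is exactly the prior Solomonoff induction already uses.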
Secondly, my objection is that, in all circumstances, if right-now-me possesses no actual information about uploaded or simulated copies of myself, then the explanation with the lowest Kolmogorov complexity for my physically-explicable sensory inputs (i.e. sensory inputs that don’t vary between physical and simulated copies) is that I am physical and am the only copy of myself in existence at the present time.
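As a toy illustration of how a complexity prior cashes this out (the description lengths below are placeholder numbers I made up for the example, since real Kolmogorov complexities are uncomputable; only the mechanics of the weighting matter):

```python
# Toy sketch of a complexity prior over two competing explanations of the same
# physically-explicable sensory stream. The bit counts are invented placeholders;
# actual Kolmogorov complexities are uncomputable.
hypotheses = {
    # "physical world W; I am the unique physical instance in it"
    "physical_and_unique": 100,  # assumed description length, in bits
    # "world W plus simulation machinery; I am simulated copy number k"
    "simulated_copy": 110,       # assumed: the extra machinery costs extra bits
}

# Complexity prior: weight each hypothesis by 2^(-description length), then normalize.
weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.6f}")
# A 10-bit difference already puts ~99.9% of the prior mass on the shorter hypothesis.
```

Nothing in those numbers is load-bearing; the point is only that whichever description is genuinely shorter soaks up almost all of the prior mass, and the disagreement is over which description that is.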
This means that the 1000 simulated copies must arrive at an incorrect conclusion for rational reasons: the scenario you invented deliberately and maliciously strips them of any means of distinguishing themselves from the original, physical me. A rational agent cannot be expected to win in every adversarially constructed situation.