To answer your first bullet: Solomonoff induction has many hypotheses. One class of hypotheses would continue predicting bits in accordance with what the first camera sees, and another class of hypotheses would continue predicting bits in accordance with what the second camera sees. (And there would be other hypotheses as well in neither class.) Both classes would get roughly equal probability, unless one of the cameras was somehow easier to specify than the other. For example, if there was a gigantic arrow of solid iron pointing at one camera, then maybe it would be easier to specify that one, and so it would get more probability. Bostrom discusses this a bit in Anthropic Bias, IIRC.
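To gesture at the arithmetic (a rough gloss of mine, ignoring the hypotheses in neither class, and writing K(H) for the length of the shortest program implementing hypothesis class H): the universal prior gives the "first camera" and "second camera" classes weights on the order of 2^-K(H1) and 2^-K(H2), so the odds between them are roughly

    P(H1) / P(H2) ≈ 2^(-K(H1)) / 2^(-K(H2)) = 2^(K(H2) - K(H1))

i.e. every bit by which one camera is easier to specify doubles the weight on its class. The iron arrow matters only insofar as it actually shortens the program that picks out that camera.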
To answer your second bullet: Yep. To reason about Solomonoff Induction properly we need to think about what the simplest “psychophysical laws” are, since they are what SI will be using to read predictions off the physics-simulation. And depending on what they are, various transformations of the camera may or may not be survivable. Plausibly, when a camera is destroyed and rebuilt with functionally similar materials, the sorts of psychophysical laws which say “you survive the process” will be more complex than the sorts which say you don’t. If so, SI would predict the end of its perceptual sequence. (Of course, after the transformation, you’d still have a system which continued to use SI, so it would update away from those psychophysical laws that, in its view, just made an erroneous prediction.)
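To make the “physics-simulation plus psychophysical law” picture a bit more concrete, here is a toy sketch (entirely my own framing; real SI hypotheses are arbitrary programs, not tidy pairs like this): think of a hypothesis as a world-model plus a readout rule saying which bits of the simulated world count as your perceptual stream. A rule that just tracks one particular piece of hardware can be very short, whereas a rule that tracks “anything functionally similar to that hardware” has to encode a similarity test, which is where the extra complexity of the “you survive the rebuild” laws would come from.

    # Toy decomposition of an SI hypothesis into a world-model ("physics") plus a
    # psychophysical law (readout rule). All names here are invented for illustration.

    def physics(t):
        """Stand-in world-model: the camera is destroyed at t=100 and a
        functionally similar duplicate comes online at t=101."""
        if t < 100:
            return {"hardware_id": 1, "pixel": t % 2}
        if t == 100:
            return {"hardware_id": None, "pixel": None}   # destroyed
        return {"hardware_id": 2, "pixel": t % 2}         # rebuilt duplicate

    def law_original_hardware(state):
        """Short law: 'your' bits are whatever hardware #1 outputs; once it is
        gone, the perceptual stream simply ends."""
        return state["pixel"] if state["hardware_id"] == 1 else None

    def law_functional_continuation(state, similar_enough):
        """Longer law: 'your' bits come from any hardware that passes a
        functional-similarity test; the test itself costs extra program bits."""
        if state["hardware_id"] is not None and similar_enough(state):
            return state["pixel"]
        return None

    # Which kind of law SI ends up trusting is just a question of which is
    # shorter once both are also forced to fit all of the data seen so far.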
To answer your third question: For SI, there is only one rule: Simpler is better. Notice that you yourself are not sure how to classify what counts as “drastic”; insofar as “drastic” turns out to be hard to specify, it’s a distinction SI would not make use of. So it may well be that a rock falling on a camera would be predicted to result in doom, but it may not. It depends on what the overall simplest psychophysical laws are. (Of course, they also have to be consistent with the data so far—so presumably lots of really simple psychophysical laws have already been ruled out by our data, and any real-world SI agent would have an “infancy period” where it is busy ruling out elegant, simple, and wrong hypotheses, hypotheses which are so wrong that they basically make it flail around like a human baby.)
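To illustrate the “infancy period” point with something runnable, here is a toy mixture in the spirit of SI (a sketch only: the hypothesis space of repeating bit patterns and the 2^-length prior are crude stand-ins for the real, uncomputable thing). Hypotheses inconsistent with the observed prefix get thrown out, and among the survivors the shortest ones dominate the prediction.

    # Toy "Solomonoff-flavored" predictor: weight each hypothesis by 2^-length,
    # throw out the ones the data has already refuted (the "infancy period"),
    # and predict the next bit from the surviving weighted mixture.
    # The hypothesis space (bit patterns read as repeating streams) is invented
    # for illustration; it is not real Solomonoff induction.

    from itertools import product

    def hypotheses(max_len=8):
        """Yield (pattern, prior weight); shorter patterns get exponentially more weight."""
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):
                yield "".join(bits), 2.0 ** -n

    def consistent(pattern, prefix):
        """True iff repeating `pattern` reproduces the observed `prefix`."""
        repeats = pattern * (len(prefix) // len(pattern) + 1)
        return repeats[:len(prefix)] == prefix

    def prob_next_is_one(prefix, max_len=8):
        """Posterior probability that the next bit is '1', under the toy mixture."""
        total = ones = 0.0
        for pattern, w in hypotheses(max_len):
            if not consistent(pattern, prefix):
                continue  # ruled out by the data so far
            total += w
            if pattern[len(prefix) % len(pattern)] == "1":
                ones += w
        return ones / total if total else 0.5  # no survivors: fall back to ignorance

    print(prob_next_is_one("010101"))  # ~0.04: the short pattern "01" dominates, predicting '0'

The real thing runs over all programs rather than repeating patterns, but the shape of the update is the same: wrong-but-elegant hypotheses die off as data arrives, and the simplicity ordering over what remains does the predicting.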
Those are my answers, at least; I’d be interested to hear if anyone disagrees.
FWIW, I am excited to hear Carl was thinking about this in 2012; I ended up having similar thoughts independently a few years ago. (My version: Solomonoff Induction is solipsistic phenomenal idealism.)
My version: Solomonoff Induction is solipsistic phenomenal idealism.
I don’t understand what this means (even searching “phenomenal idealism” yields very few results on Google, and none that look especially relevant). Have you written up your version anywhere, or do you have a link to explain what solipsistic phenomenal idealism or phenomenal idealism mean? (I understand solipsism and idealism already; I just don’t know how they combine and what work the “phenomenal” part is doing.)
Here’s an old term paper I wrote defending phenomenal idealism. It explains early on what it is. It’s basically Berkeley’s idealism but without God. As I characterize it, phenomenal idealism says there are minds/experiences and also physical things, but only the former are fundamental; physical things are constructs out of minds/experiences. Solipsistic phenomenal idealism just means you are the only mind (or at least, the only fundamental one—all others are constructs out of yours).
“Phenomenal” might not be relevant, it’s just the term I was taught for the view. I’d just say “Solipsistic idealism” except that there are so many kinds of idealism that I don’t think that would be helpful.