It seems that you get similar questions as a natural outgrowth of simple computational models of thought. E.g. if one performs Solomonoff induction on the stream of camera inputs to a robot, what kind of short programs will dominate the probability distribution over the next input? Not just programs that simulate the physics of our universe: one would also need additional code to “read off” the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff. Using this framework you can pose questions like “if the camera is expected to be rebuilt using different but functionally equivalent materials, will this change the inputs Solomonoff induction predicts?” or “if the camera is about to be duplicated, which copy’s inputs will be predicted by Solomonoff induction?”
If we go beyond Solomonoff induction to allow actions, then you get questions that map pretty well to debates about “free will.”
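To make that two-part structure concrete, here is a minimal sketch, assuming a hand-built hypothesis class with made-up description lengths rather than an enumeration of all programs (full Solomonoff induction is incomputable): each hypothesis pairs a “physics” simulator with a separate readout function that extracts the camera stream, and predictions are mixed with weight 2^-(description length).

```python
# Toy sketch only: real Solomonoff induction enumerates all programs and is
# incomputable. Here each hypothesis is a hand-written pair of
#   world_step : state -> state        (the simulated "physics")
#   readout    : state -> camera bit   (the extra "read off the camera" code)
# with an assumed description length in bits; hypotheses consistent with the
# observed history are mixed with weight 2^-(description length).

from fractions import Fraction

HYPOTHESES = [
    # world state is a counter; the camera reads off its parity
    ("counter world, parity readout", 10, lambda s: s + 1, lambda s: s % 2),
    # same physics, different readout: camera sees whether the state is divisible by 3
    ("counter world, mod-3 readout", 14, lambda s: s + 1, lambda s: 1 if s % 3 == 0 else 0),
    # frozen world: the camera always sees 1
    ("frozen world", 6, lambda s: s, lambda s: 1),
]

def stream(world_step, readout, n, state=0):
    """Run the simulated world for n steps and read off the camera bits."""
    bits = []
    for _ in range(n):
        bits.append(readout(state))
        state = world_step(state)
    return bits

def predict_next(history):
    """Mix the next-bit predictions of all hypotheses that reproduce history,
    each weighted by 2^-(its description length)."""
    weight = {0: Fraction(0), 1: Fraction(0)}
    for _name, bits, step, read in HYPOTHESES:
        if stream(step, read, len(history)) == list(history):
            next_bit = stream(step, read, len(history) + 1)[-1]
            weight[next_bit] += Fraction(1, 2 ** bits)
    total = weight[0] + weight[1]
    return {b: w / total for b, w in weight.items()} if total else None

print(predict_next([1]))           # two hypotheses fit; the shorter "frozen world" dominates
print(predict_next([0, 1, 0, 1]))  # only the parity readout reproduces this history
```

In this toy picture, rebuilding or duplicating the camera amounts to changing or copying only the readout function while the simulated physics stays fixed.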
one would also need additional code to “read off” the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff.
I don’t understand why the additional code looks like epiphenomenal mind-stuff. Care to explain?
I take Carl to mean that: the program corresponding to ‘universe A simulating universe B and I am in universe B’ is strictly more complex than ‘I am in universe B’ while also predicting all the same observations, and so the ‘universe A simulating universe B’ part of the program makes no difference in the same way that mental epiphenomena make no difference—they predict you will make the same observations, while being strictly more complex.
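As a sketch of the penalty being described (notation assumed here, not from the thread): write \(\ell(p)\) for the length of program \(p\) and \(w(p) = 2^{-\ell(p)}\) for its weight in the Solomonoff mixture. If \(p\) outputs the observation stream directly and \(p'\) spends \(c\) extra bits also encoding the simulating universe A while producing the same stream, then

\[
\frac{w(p')}{w(p)} = \frac{2^{-\ell(p')}}{2^{-\ell(p)}} = 2^{-c},
\]

so the extra structure changes none of the predictions and only costs a constant factor of prior weight.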
This seems to be talking about something entirely different.
the program corresponding to ‘universe A simulating universe B and I am in universe B’ is strictly more complex than ‘I am in universe B’ while also predicting all the same observations, and so the ‘universe A simulating universe B’ part of the program makes no difference in the same way that mental epiphenomena make no difference—they predict you will make the same observations, while being strictly more complex.
True, but, just as a reminder, that’s not the position we’re in. There are other (plausibly necessary) parts of our world model that could give us the implication “universe A simulates us” “for free”, just as “the electron that goes beyond our cosmological horizon keeps existing” is an implication we get “for free” from minimal models of physics.
In this case (per the standard Simulation Argument), the need to resolve the question of “what happens in civilizations that can construct virtual worlds indistinguishable from non-virtual worlds” can force us to posit parts of a (minimal) model that then imply the existence of universe A.
Ah, ok, that makes sense. Thanks!
The code simulating a physical universe doesn’t need to make any reference to which brain or camera in the simulation is being “read off” to provide the sensory input stream. The additional code takes the simulation, which is a complete picture of the world according to the laws of physics as they are seen by the creatures in the simulation, and outputs a sensory stream. This function is directly analogous to what dualist/epiphenomenalist philosopher of mind David Chalmers calls “psychophysical laws.”
I don’t know if this insight is originally yours or not, but thank you for it. It’s like you just gave me a piece of the puzzle I was missing (even if I still don’t know where it fits).
Oh wow… I had been planning on writing a discussion post on essentially this topic. One quick question: if you have figured out the shortest program that will generate the camera data, is there a non-arbitrary way we can decide which parts of the program correspond to “physics of our universe” and which parts correspond to “reading off the camera’s data stream within the universe”?