Idle thoughts about UDASSA I: the Simulation hypothesis
I was talking to my neighbor about UDASSA the other day. He mentioned a book I keep getting recommended but have never read, in which characters get simulated and the simulating machine is then progressively slowed down.
One would expect that, from inside the simulation, one wouldn't be able to notice the simulating machine being slowed down.
This presents a conundrum for simulation-style hypotheses: if the simulation can be slowed down 100x without the insiders noticing, why not 1000x, or 10^100x, or quadrilliongoogolgrahamsnumber-x?
If so, it would mean a possibly unbounded number of simulations could be run.
Not so, says UDASSA. The simulating universe is also subject to UDASSA, which constrains the size of the simulating universe and the time period it occupies. Additionally, ultraslow computation is in conflict with thermodynamic decay, and fighting thermodynamic decay costs description-length bits, which UDASSA penalizes.
I conclude that this objection to simulation hypotheses is probably answered by UDASSA.
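To make the bit-accounting concrete, here is a toy sketch (the function names and the exact accounting are my own illustrative assumptions, not part of the original argument): under UDASSA each extra description bit halves measure, and picking out a run slowed down by a factor f plausibly costs on the order of log2(f) extra bits, so the measure of slowed-down simulations falls off roughly like 1/f rather than staying constant.

```python
import math

def measure_penalty(extra_bits: float) -> float:
    """UDASSA-style multiplier: each extra description bit halves measure."""
    return 2.0 ** (-extra_bits)

def slowdown_measure(factor: float) -> float:
    """Toy model: specifying a slowdown factor costs ~log2(factor) extra bits,
    so measure relative to the unslowed run falls off like 1/factor."""
    return measure_penalty(math.log2(factor))

# A 100x slowdown costs ~6.6 extra bits; a 10^100x slowdown costs ~332 bits.
for f in (1, 100, 10**100):
    print(f"{f}x slowdown: ~{math.log2(f):.1f} bits, relative measure {slowdown_measure(f):.3g}")
```

This is why arbitrarily extreme slowdowns don't dominate: each further factor of two costs another bit, yielding a normalizable distribution over slowdown factors rather than an unbounded supply of equally-weighted simulations.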
Idle thoughts about UDASSA II: Is Uploading Death?
There is an argument that uploading doesn't work, since encoding your brain into a machine incurs a minimum number of encoding bits. Under UDASSA, each additional bit means 2x less Subjective Reality Fluid, so even a small encoding cost would mean near-certain subjective annihilation.
There is something that confuses me in this argument. Could it not be possible to encode one's subjective experiences even more efficiently than a biological body does? This would make you exist MORE as an upload.
OTOH it becomes a little funky again when there are many copies, as this increases the individual coding cost (but also there are more of you, sooo).
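The copies tradeoff can be spelled out with the same toy arithmetic as before (again, my own illustrative assumptions, not a claim from the original argument): picking out one copy among n costs roughly log2(n) extra bits, dividing each copy's measure by n, but there are n copies, so on this simple accounting the total measure comes out unchanged.

```python
import math

def per_copy_measure(base_measure: float, n_copies: int) -> float:
    """Toy model: specifying which of n copies you are costs ~log2(n) extra
    bits, and each bit halves measure, so each copy gets 1/n of the base."""
    return base_measure * 2.0 ** (-math.log2(n_copies))

base = 1.0
n = 8
per_copy = per_copy_measure(base, n)  # each copy: 1/8 of the single-instance measure
total = n * per_copy                  # all copies together: back to the base measure
print(per_copy, total)
```

On this crude accounting the "more of you" and the "higher coding cost per copy" exactly cancel, which is one way of reading the parenthetical above; whether the real coding costs behave this cleanly is of course the open question.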
In most conceptions of simulation, there is no meaning to “slowed down”, from the perspective of the simulated universe. Time is a local phenomenon in this view—it’s just a compression mechanism so the simulators don’t have to store ALL the states of the simulation, just the current state and the rules to progress it.
Note that this COULD be said of a non-simulated universe as well—past and future states are determined but not accessible, and the universe is self-discovering them by operating on the current state via physics rules. So there’s still no inside-observable difference between simulated and non-simulated universes.
UDASSA seems like anthropic reasoning extended to include Boltzmann-Brain-like conceptions of experience. I don't put a lot of weight on it, because all anthropic reasoning requires an outside view of possible observations to be meaningful.
And of course, none of this relates to uploading, where a given sequence of experiences can span levels of simulation. There may or may not be a way to do it, but it'd be a copy, not a continuation.
The point you make in your first paragraph is contained in the original shortform post.
The point of the post is exactly that a UDASSA-style argument can nevertheless recover something like a 'distribution of likely slowdown factors'.
This seems quite curious.
I suggest reading Falkovich's post on UDASSA to get a sense of what's so intriguing about the UDASSA framework.