Here’s a simple argument that simulating universes based on Turing machine number can give manipulated results.
Say we lived in a universe much like this one, except that:
The universe is deterministic
It’s simulated by a very short Turing machine
It has a center, and
That center is actually nearby! We can send a rocket to it.
So we send a rocket to the center of the universe and leave a plaque saying “the answer to all your questions is Spongebob”. Now any aliens in other universes that simulate our universe and ask “what’s in the center of that universe at time step 10^1000?” will see the plaque, search elsewhere in our universe for the reference, and watch Spongebob. We’ve managed to get aliens outside our universe to watch Spongebob.
I feel like it would be helpful to speak precisely about the universal prior. Here’s my understanding.
It’s a partial probability distribution over bitstrings (what’s sometimes called a semimeasure): it gives a non-zero probability to every bitstring, but these probabilities add up to strictly less than 1. It’s defined as follows:
That is, describe Turing machines by a binary code, and assign each one a probability based on the length of its code, such that those probabilities add up to exactly 1. Then magically run all Turing machines “to completion”. For those that halt leaving a bitstring x on their tape, attribute the probability of that Turing machine to x. Now we have a probability distribution over bitstrings, though the probabilities add up to less than one because not all of the Turing machines halted.
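One standard way to write this down, assuming the binary code ⟨T⟩ used for machine T is prefix-free so that the machine weights sum to 1:

```latex
% Probability the universal prior assigns to a bitstring x:
% the total weight of all machines that halt with exactly x on the tape.
M(x) = \sum_{T \,:\, T \text{ halts with output } x} 2^{-|\langle T \rangle|}
```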
You cannot compute this probability distribution, but you can compute lower bounds on the probabilities of its bitstrings. (The Nth lower bound is the probability distribution you get from running the first N TMs for N steps.)
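As a sketch of what the Nth lower bound looks like (the enumerate_tms and run helpers here are hypothetical stand-ins, not a real implementation):

```python
from collections import defaultdict

def nth_lower_bound(n, enumerate_tms, run):
    """Nth lower bound on the universal prior (sketch).

    Assumes enumerate_tms(n) yields the first n Turing machines as
    (code_length_in_bits, machine) pairs, and run(machine, steps) returns
    the output bitstring if the machine halts within `steps` steps, else None.
    """
    probs = defaultdict(float)
    for code_length, machine in enumerate_tms(n):
        output = run(machine, n)          # run each of the first n TMs for n steps
        if output is not None:            # only machines that halted contribute
            probs[output] += 2.0 ** -code_length
    return probs                          # pointwise lower bound on the true prior
```

Since raising N only lets more machines halt within the budget, each bitstring's lower bound is non-decreasing in N and approaches its true probability from below.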
Call a TM that halts poisoned if its output is determined as follows (see the sketch after this list):
The TM simulates a complex universe full of intelligent life, then selects a tiny portion of that universe to output, erasing the rest.
That intelligent life realizes this might happen, and writes messages in many places that could plausibly be selected.
It works, and the TM’s output is determined by what the intelligent life it simulated chose to leave behind.
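In pseudocode, the shape of such a machine might look like this (every helper named here is hypothetical, and actually running it would take vastly more compute than we have):

```python
def poisoned_tm():
    """Sketch of a 'poisoned' TM in the sense above; all helpers are hypothetical."""
    universe = initial_conditions()          # a short program encoding a rich universe
    for _ in range(10 ** 1000):              # simulate long enough for intelligent life
        universe = step(universe)            # ...to evolve and anticipate being inspected
    region = select_tiny_region(universe)    # e.g. the center of the universe
    return encode_as_bitstring(region)       # output only that region; erase the rest
```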
If we approximate the universal prior, the probability contribution of poisoned TMs will be exactly zero, because we don’t have nearly enough compute to simulate a poisoned TM until it halts. However, if there’s an outer universe with dramatically more compute available, and it’s approximating the universal prior with enough computational power to actually run the poisoned TMs to completion, they’ll affect the probability distribution over bitstrings, making the bitstrings containing the messages the simulated life chose to leave behind more likely.
So I think Paul’s right, actually (not what I expected when I started writing this): if you approximate the universal prior well enough, the distribution you see will have been manipulated.
Very curious what part of this people think is wrong.