The only model I’ve come across that seems to handle this type of thought experiment without breaking is UDASSA.
Consider a computer which is 2 atoms thick running a simulation of you. Suppose this computer can be divided down the middle into two 1 atom thick computers, each of which would run the same simulation independently. We are faced with an unfortunate dichotomy: either the simulation on the 2 atom thick computer has the same weight as the two 1 atom thick simulations put together, or it doesn’t.
In the first case, we have to accept that some computer simulations count for more than others, even when they are running the same simulation (or we have to de-duplicate the set of all experiences, which leads to serious problems with Boltzmann brains). In this case, we are faced with the problem of comparing different substrates, and it seems impossible to do so without making arbitrary choices.
In the second case, we have to accept that the operation of dividing the 2 atom thick computer is morally significant, which is even worse. Where exactly does the transition occur? What if each layer of the 2 atom thick computer can run independently before splitting? Is physical contact really significant? What about computers that aren’t physically coherent? What if two 1 atom thick computers periodically synchronize themselves and self-destruct if they aren’t synchronized: does this synchronization effectively destroy one of the copies? I know of no way to accept this possibility without extremely counter-intuitive consequences.
UDASSA implies that simulations on the 2 atom thick computer count for twice as much as simulations on the 1 atom thick computer, because they are easier to specify. For any description that points to the simulation running on one of the 1 atom thick computers, there are two descriptions of roughly equal complexity that point to the simulation running on the 2 atom thick computer: one pointing to each layer. When a 2 atom thick computer splits, the total number of descriptions pointing to the experience it is simulating doesn’t change.
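To make the counting explicit, here is a toy sketch of the UDASSA-style arithmetic (not from the original comment): weight is taken to be a sum of 2^(-description length) over descriptions that locate the experience, and the description length L below is made up purely for illustration. The only point is that the total weight is unchanged by the split.

```python
# Toy UDASSA-style weighting: the weight of an experience is the sum, over all
# descriptions that locate it, of 2^(-length of the description in bits).
def weight(description_lengths):
    return sum(2 ** -length for length in description_lengths)

L = 1000  # hypothetical bit-length of a description locating one layer / one 1 atom thick computer

# Before the split: the simulation on the 2 atom thick computer is picked out by
# two descriptions of comparable length, one pointing at each layer.
before_split = weight([L, L])

# After the split: two independent 1 atom thick computers, each picked out by
# one description of (roughly) the same length.
after_split = weight([L]) + weight([L])

assert before_split == after_split  # the split neither creates nor destroys weight
```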
Thanks, I figured this wouldn’t be a new question. UDASSA seems quite unsatisfying (I have no formal argument for that claim) but the perspective is nice. I appreciate the pointer :).