In the year 2020, cousin_it downloads some software for running predictor sims. He wants to optimize his future conduct: test which particular minor mind-hack would be the most beneficial, given his individual psychological makeup. So he runs many ancestor sims of himself, poisoned with different irrational mind-hacks, to see who wins in the past. I am one such sim, poisoned in an unknown manner; what should I do?
I have precommitted as strongly as I can to never run simulations of myself which are worse off than the version the simulation was based on. This might fall out as a consequence of UDT, but with a time-varying utility function I’m not really sure.
In general, self-copying and self-simulation require extreme care. They might be able to affect subjective experience in a way that goes back in time. The rules of subjective experience, if any, are non-transferable (you can’t learn them from someone else who’s figured them out, even in principle) and might not be discoverable at all.
Humans can’t easily precommit to anything at all, and even if they could, it’d be incredibly stupid to try without thinking about it for a very, very long time. I’m surprised at how many people don’t immediately see this.
I don’t believe your decision follows from UDT. If you have a short past and a long future, knowledge gained from sims may improve your future enough to pay off the sims’ suffering.
This is firmly in the realm of wild speculation and/or science fiction plot ideas. That said -
You’re right that it does not follow from UDT alone. I do think it follows from a combination of UDT with many common types of utility functions; in particular, if utility is discounted exponentially with time, or if the sims must halt and being halted is sufficiently bad.
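To make the discounting point concrete, here is a toy sketch (all the numbers, the linear cost/benefit model, and the 3% discount rate are purely my own illustrative assumptions, not anything implied by UDT itself): the sim’s suffering is incurred now, while the informational benefit only pays off far in the future, so under exponential discounting the benefit can easily fail to cover the cost.

```python
# Toy model: is it worth running a suffering sim of your past self,
# given exponentially time-discounted utility?
# All numbers are made-up placeholders for illustration only.

def discounted(value, years_from_now, discount_rate=0.03):
    """Exponentially discounted utility: value * (1 - rate)^years."""
    return value * (1 - discount_rate) ** years_from_now

sim_suffering = -100.0   # disutility of the sim's experience, incurred now
future_benefit = 150.0   # raw (undiscounted) benefit of the information gained
benefit_delay = 50       # years until that information actually pays off

net = discounted(sim_suffering, 0) + discounted(future_benefit, benefit_delay)
print(net)  # about -67: the discounted future gain doesn't cover the sim's cost
```

With no discounting, or with a benefit that arrives soon enough or is large enough, the sign flips and the calculation favors running the sims, which is essentially the "short past, long future" objection above.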
A lot depends on what happens subjectively after the simulation is halted, or on whether there are sufficient resources to keep it going indefinitely. In the latter case, most simulated bad things can easily be made up for by altering the simulated universe after the useful data has been extracted. This would imply that if you are living in a sim created by your future self, and your future self follows UDT and has sufficient resources, you’ll end up in a heaven of some sort. In fact, if you ever did gain the ability to run long-duration simulations of your past selves, UDT seems to imply that you should run rescue sims of yourself.
Simulations that have to halt after a short duration are very problematic, though: if you anticipate a long life ahead of you, then your past selves probably have long lives ahead of them as well, lives that would be cut short by a sim that has to halt. This would probably outweigh the benefits of any information gleaned.