https://arbital.com/p/cev/ : “If any hypothetical extrapolated person worries about being checked, delete that concern and extrapolate them as though they didn’t have it. This is necessary to prevent the check itself from having a UDT influence on the extrapolation and the actual future.”
Our altruism (and many other emotions) is, evolutionarily, just an acausal reaction to the worry that we're being simulated by other humans.
It seems like a jerk move to punish someone for being self-aware enough to replace their emotions with the decision-theoretic considerations they evolved to approximate.
And it seems unnecessary: if they behave nicely when checked because they worry they're being checked, they should also behave nicely when unchecked, since they can't tell the two situations apart.
I think (given my extremely limited understanding of this stuff) this is to prevent UDT agents from fooling whoever is simulating them by recognizing that they're in a simulation and behaving differently there.
I.e., you want to ignore the following code:
if (inOmegasHead) {
    oneBox();
} else {
    twoBox();
}
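To see why that branch is also unwritable, here's a minimal runnable sketch of a toy Newcomb setup (my own illustration, not from the Arbital page; the names one_boxer, two_boxer, and payoff are invented): Omega "checks" the agent by calling its decision function, and the real choice is just another call to the same function with the same (empty) inputs, so there is no inOmegasHead flag to branch on.

# Toy Newcomb sketch (illustrative, not from the source).
def one_boxer() -> str:
    # No inOmegasHead parameter exists: the agent cannot tell whether
    # this call is Omega's simulation or the real decision.
    return "oneBox"

def two_boxer() -> str:
    return "twoBox"

def payoff(agent) -> int:
    # Omega's check: simulate the agent and fill the opaque box
    # with $1M only if the simulation one-boxes.
    opaque = 1_000_000 if agent() == "oneBox" else 0
    # The real run, indistinguishable from the check above.
    choice = agent()
    return opaque if choice == "oneBox" else opaque + 1_000

print(payoff(one_boxer))  # 1000000: behaves the same checked or unchecked
print(payoff(two_boxer))  # 1000: defects in both runs, so the opaque box is empty

The inOmegasHead branch would require an input distinguishing the simulated call from the real one, which this setup never provides; that's the sense in which behaving nicely when checked entails behaving nicely when unchecked.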