Specifically, if there’s a high chance we are in an adversary’s simulation, that’s equivalent to conceding that you can’t actually win. We are no more able to deal with such simulators than we are able to deal with real-life Avengers or the Justice League stepping out of the comics to attack us.
Thus, the exercise is pointless: no AI safety proposal could survive such forces.
You might not have understood my above comment. A simulation hypothesis having high credence (let alone being true) is not necessary for acausal attacks to be a problem for PreDCA. What matters is that the AI's prior assigns non-negligible weight to simulator hypotheses, since such hypotheses can then influence its decisions. That is, this worry is independent of whether we actually live in a simulation (and whether you know that).
Thank you for clarifying; I was pretty confused about the acausal attack issue.