No. The simulation needs to imitate the null hypothesis (what we understand as reality), otherwise it’s falsified. Therefore, it has to be computing every part of the null universe visible to the AI. In particular, it has to compute the AI responding to the user responding to the AI. So, it’s not possible for the attacker to make the user-AI loop less tight.
Yes, I had understood that, but this is only the case in the limit where the AI is completely certain about every minute detail of its immediate physical reality, right? Otherwise, as in my example above, the simulator could introduce microscopic variations (wherever the AI isn't yet completely certain about reality, for instance in some parts of the user's brain) which subtly alter reality in such a way that the information between AI and user from counterfactual actions takes longer to arrive. Or am I missing something?
The variety of attacks doesn’t imply the impossibility of defending from them.
If the information takes a little longer to arrive, then the user will still be inside the threshold.
A more concerning problem is: what if the simulation only contains a coarse-grained simulation of the user, such that it doesn't register as an agent? To account for this, we might need to define a notion of "coarse-grained agent" and allow such entities to be candidate users. Or, maybe any coarse-grained agent has to be an actual agent with a similar loss function, in which case everything works out on its own. These are nuances that probably require uncovering more of the math to understand properly.
Oh, so it seems we need a coarse-grained user (a vague enough physical realization of the user) for threshold problems to arise. I understand now, thank you again!
You’re right, thank you!