I was suggesting that if the time difference wasn’t too large, the AIXI could deduce “humans plan at time 10 to press button” → “weirdness at time 10 and button pressed at time 100”. If it’s good at modelling us, it may be able to deduce our plans long before we do, and as long as the plan predates the weirdness, it can model the plan as causal.
Or, if it experiences more varied situations, it might deduce “no interactions with humans for long periods” → “no weirdness”, and act accordingly.