One intuitive way this case could work out is if the SUDT could say: "Ok, I'm in this Earth. And these Earthians consider themselves 'the same as' (or close enough to) the alt-Earthians from the world where I'm actually inside a simulation that Omega is running to predict what I would do; so, though I'm only taking orders from these Earthians, I still want to act timelessly in this case." This might be somewhat vacuous, since it just refers back to the humans' intuitions about decision theory (what they consider "the same as" themselves) rather than actually using the AI to do the decision theory, or making the decision theory explicit. But at least it uses some of the AI's intelligence to apply the humans' intuitions across more lines of hypothetical reasoning than the humans could manage by themselves.
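To make the shape of that reasoning concrete, here is a minimal toy sketch in Python. Every name in it is hypothetical (nothing here is an actual SUDT specification): the point is just that the agent's choice between timeless and causal behavior bottoms out in a "same as" relation supplied by the humans, not derived by the agent.

```python
# Toy sketch of the paragraph above: the agent does no decision theory of
# its own; it only applies the humans' "same as (or close enough)" judgment
# across hypothetical worlds. All names are illustrative.

def humans_consider_same(world_a: dict, world_b: dict) -> bool:
    """Stand-in for the Earthians' intuitive identification between the
    people of two worlds. Supplied by the humans, not computed by the AI."""
    return world_a["population"] == world_b["population"]

def choose_action(actual_world: dict, prediction_world: dict) -> str:
    """Act timelessly (one-box) iff the humans the agent takes orders from
    identify with their counterparts in the world where the agent is only
    Omega's simulation; otherwise act causally (two-box)."""
    if humans_consider_same(actual_world, prediction_world):
        # Treat this decision as also fixing Omega's prediction.
        return "one-box"
    # No identification across worlds, so no timeless reasoning.
    return "two-box"

earth = {"population": "Earthians"}
alt_earth = {"population": "Earthians"}  # the world Omega simulated

print(choose_action(earth, alt_earth))  # -> "one-box"
```

Note how the sketch makes the vacuousness visible: all the decision-theoretic content lives in `humans_consider_same`, and the AI's contribution is only to evaluate that human-supplied relation in hypothetical worlds the humans themselves never considered.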