Simulating someone isn’t a side-effect, it’s an output (at least, depending on your anthropic assumptions). Something weird is going on here: suppose Omega runs the simulation inside a rotating black hole, or somewhere else causally separated from you; it seems paradoxical that that simulation should have any effect on your behavior. But I think the problem lies in your decision theory, not in the simulation function.