I think I basically agree, though I’m pretty uncertain. You’d have to simulate not just the other being, but also the other being simulating you, with a certain fidelity. In my post I posed a scenario where the other being is watching you through an ASI simulation, which makes it computationally much easier for them to simulate you in their head; but this means you have to simulate both what the other being is thinking and what it is seeing. Simply modelling the being as thinking “I will torture him for X years if he doesn’t do action Y” is an oversimplification, since you also have to expand the “him” out into “a simulation of you” in very high detail.
Therefore, I think it is still extremely computationally intensive for us to simulate the being simulating us.
Huh, I thought that many people supported both a Tegmark IV multiverse and a Bayesian interpretation of probability theory, yet you list them as opposite approaches?
I suppose my current philosophy is that the Tegmark IV multiverse does exist, and that probability refers to the credence I should lend to each possible world I could be embedded in (this assumes that “I” am localized to only one possible world). This seems to incorporate both of the approaches you listed as “opposite”.