You are right to be confused. The idea that the simulators would necessarily have human-like motives can only be justified on anthropocentric grounds—whatever is out there, it must be like us.
Anything capable of running us as a simulation might exist in any arbitrarily strange physical environment that allowed enough processing power for the job. There is no basis for the assumption that simulators would have humanly comprehensible motives or a similar physical environment.
The simulation problem requires that we think about our entire perceived universe as a single point in possible-universe-space, and it is not possible to extrapolate from this one point.
It confuses me slightly that, judging from a superficial glance, the discussion there and in threads like this one focuses on “ancestor” simulations rather than simulations run by five-dimensional cephalopods. Ryan North got it right when he had T-Rex say “and not necessarily our own”, but then he seems to get confused when he says “a 1:1 simulation of a universe wouldn’t work”. Why not?
Personally, I like Wei Dai’s conclusion that we both are and aren’t in a simulation.