This argument anthropomorphizes: it assumes that the purpose of the purported simulation is to model humanity. Suppose it isn't? Suppose the purpose is to model a universe with certain physical laws, and one of the unexpected outcomes is that intelligent technological life happens to evolve on a small rocky planet orbiting one star in a spiral arm of one galaxy. That could be an entirely unanticipated, maybe even unnoticed, outcome of a simulation with a very different purpose.