An example of a technical step forward would be a game whose world is so large it must be procedurally generated, and which also has two further properties: it is massively multiplayer, and players can arbitrarily alter the environment.
You’d get the technical challenge of reconciling player-made alterations to the environment with the “untouched” version of the environment produced by your generative algorithm. Then you’d get the additional challenge of sharing those changes across lots of different players in real time.
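To make that first challenge concrete, here’s a minimal sketch of the standard trick (my own illustration; names like `WORLD_SEED` and `generated_block` are made up, not from any real engine): derive the untouched world deterministically from a seed, and store player alterations as a sparse delta layer that overrides the generator.

```python
# Sketch: deterministic worldgen plus a sparse override layer for player edits.
import hashlib

WORLD_SEED = 42  # hypothetical global seed

def generated_block(x: int, y: int) -> str:
    """Deterministically derive the 'untouched' block at (x, y) from the seed."""
    digest = hashlib.sha256(f"{WORLD_SEED}:{x}:{y}".encode()).digest()
    return "rock" if digest[0] % 2 == 0 else "dirt"

class World:
    def __init__(self) -> None:
        # Only player alterations are stored; everything else is recomputed
        # on demand from the seed.
        self.deltas: dict[tuple[int, int], str] = {}

    def get_block(self, x: int, y: int) -> str:
        # Player edits shadow the generator's output.
        return self.deltas.get((x, y), generated_block(x, y))

    def set_block(self, x: int, y: int, block: str) -> None:
        self.deltas[x, y] = block

world = World()
print(world.get_block(0, 0))          # generated value
world.set_block(0, 0, "player_statue")
print(world.get_block(0, 0))          # player override wins
```

The multiplayer half of the problem then reduces to replicating the delta map between players, which keeps network traffic proportional to the number of edits rather than the size of the world.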
I don’t get the sense that either of the two properties (massively multiplayer and alterable environment) is a big part of this game.
If a game with all three properties (procedural generation of a large universe, massively multiplayer, and alterable environment) were to be made, it’d make me take a harder look at simulation arguments.
Another neat direction for this work is corroborating the computational feasibility of simulationism and artificial life.
If abstractions are natural, then certain optimizations in physical simulation software become possible, at least in principle: optimizations that save compute by simulating only at the abstraction levels the simulation’s inhabitants can directly observe or measure.
If abstractions aren’t natural, then the simulation software can’t generically know what it can get away with lossily compressing with respect to a given observer. Or something to that effect.
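As a toy illustration of the kind of optimization I have in mind (everything here is hypothetical, not from the text above): a simulator that maintains each region only as a cheap coarse summary, and lazily fills in fine-grained detail consistent with that summary when an observer actually measures it.

```python
# Sketch: observer-driven level-of-detail simulation.
import random

class Region:
    def __init__(self, mean_temp: float) -> None:
        self.mean_temp = mean_temp      # cheap, always-maintained abstraction
        self.fine_grained: list[float] | None = None  # computed only on demand

    def coarse_step(self, ambient: float) -> None:
        # Cheap update at the abstraction level: relax toward ambient temperature.
        self.mean_temp += 0.1 * (ambient - self.mean_temp)

    def observe(self, n_cells: int = 8) -> list[float]:
        # An observer looks closely: lazily instantiate fine detail that is
        # consistent with the coarse summary the simulation has maintained.
        if self.fine_grained is None:
            noise = [random.gauss(0.0, 1.0) for _ in range(n_cells)]
            correction = self.mean_temp - sum(noise) / n_cells
            self.fine_grained = [v + correction for v in noise]
        return self.fine_grained

region = Region(mean_temp=20.0)
for _ in range(100):
    region.coarse_step(ambient=15.0)   # cheap coarse-only simulation
print(region.observe())                # detail materializes only when measured
```

The point of the toy: if abstractions are natural, the simulator can know in advance which summaries suffice for a given observer and safely discard the microstate; if they aren’t, it has no general way to tell which detail is safe to throw away.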