I suspect that you and I have different concepts of what a simulation is, because you describe an agent (presumably a human being) interacting with the “simulation” in real time. In this case you are mucking up the dynamics of the simulation by introducing a factor the model does not accommodate: the human, whose reasoning is influenced by knowledge from outside the simulation.
I didn’t necessarily mean human agents. For example, this is a simulation of the Iterated Prisoner’s Dilemma (IPD) with which non-human agents can interact. At each step, the agents make decisions based on the current state of the simulation. If you wanted, you could run exactly the same simulation with actual humans interacting anonymously via terminals with a server running the simulation. On the other hand, this is a non-simulation of the same problem, because it lacks actual agents interacting with it. It’s just a calculation, albeit an accurate one.
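To make the distinction concrete, here is a minimal Python sketch under purely hypothetical assumptions; the agent strategies, payoff values, and round count are my own illustration, not anything from the linked simulation. The point is only the structural difference: `simulate` has agents reacting to an evolving state each step, while `calculate` produces the same numbers with no agents at all.

```python
import random

# Standard IPD payoffs: (my_move, their_move) -> my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

class TitForTat:
    """Cooperates on the first round, then mirrors the opponent's last move."""
    def decide(self, history):
        return "C" if not history else history[-1][1]

class RandomAgent:
    """Picks a move uniformly at random each round."""
    def decide(self, history):
        return random.choice(["C", "D"])

def simulate(agent_a, agent_b, rounds=10):
    """A simulation in the sense above: actual agents interact with the
    current state of the game, one decision per step."""
    history_a, history_b = [], []  # each entry: (own_move, opponent_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = agent_a.decide(history_a)
        move_b = agent_b.decide(history_b)
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
    return score_a, score_b

def calculate(rounds=10):
    """A non-simulation of the same problem: no agents interact with
    anything. Two TitForTat players cooperate every round, so the
    outcome is just arithmetic."""
    return PAYOFFS[("C", "C")] * rounds, PAYOFFS[("C", "C")] * rounds

if __name__ == "__main__":
    print(simulate(TitForTat(), RandomAgent()))
    print(simulate(TitForTat(), TitForTat()))  # agents interacting...
    print(calculate())                         # ...versus a bare calculation
```

Swapping in human players would only mean replacing the `decide` methods with calls out to terminals; the simulation itself would be unchanged, which is why I don’t think the presence of humans is what makes or breaks a simulation.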
In general, by “simulation” I mean a practical stand-in for a problem whose elements would make it impossible or impractical to construct in real life, one that is nevertheless identical in terms of rules, interactions, results, and so on.
Perhaps I am answering a question other than the one you are asking, but: Every exercise in simulation is an exercise in evaluating which modeling concerns are relevant to the system in question, and then accounting for those factors up to a desired level of accuracy.
That is more or less the question I am asking, and evaluating which modeling concerns are relevant to the system in question is the crucial part. But how can we be certain that we have made a correct analogy or simplification? It’s easy to tell that we haven’t when the end results differ from reality, but if those results are exactly what we want to learn, we cannot use them for validation and need a different approach.
Is it possible to simulate Omega, for example? It would be like the repeated coin toss mentioned earlier, except that we would need to prove that our simulation does in fact lead, in all cases, to the same decisions an actual Omega would make. Or what if we need statistically significant results from a single agent in a one-shot problem, and we can’t memory-wipe the agent? And so on.
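The only general escape hatch I can see is to validate the simplified model against the exact answer on small instances where both are computable, and only then trust it, provisionally, on the instances we actually care about. Here is a sketch using the coin-toss case, since it is one where the ground truth is enumerable for small n; the run-length question and the trial counts are hypothetical choices of mine, just to illustrate the method:

```python
import itertools
import random

def exact_prob_run(n, k):
    """Exact probability of a run of k or more heads in n fair tosses,
    by brute-force enumeration (only feasible for small n)."""
    hits = 0
    for seq in itertools.product("HT", repeat=n):
        longest = run = 0
        for toss in seq:
            run = run + 1 if toss == "H" else 0
            longest = max(longest, run)
        hits += longest >= k
    return hits / 2**n

def simulated_prob_run(n, k, trials=100_000):
    """The 'simplified' model: a Monte Carlo estimate of the same quantity."""
    hits = 0
    for _ in range(trials):
        longest = run = 0
        for _ in range(n):
            run = run + 1 if random.random() < 0.5 else 0
            longest = max(longest, run)
        hits += longest >= k
    return hits / trials

if __name__ == "__main__":
    # Calibrate on small n, where the ground truth is available...
    for n in (5, 10, 15):
        print(n, exact_prob_run(n, 3), simulated_prob_run(n, 3))
    # ...then extrapolate to a case where enumeration is impractical.
    print(100, simulated_prob_run(100, 3))
```

Of course, this only works when some tractable corner of the problem exists to calibrate against; for something like Omega, where no such corner may exist, the worry stands.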