Simulating Problems
Apologies for the rather mathematical nature of this post, but it seems to have some implications for topics relevant to LW. Prior to posting I looked for literature on this but was unable to find any; pointers would be appreciated.
In short, my question is: How can we prove that any simulation of a problem really simulates the problem?
I want to demonstrate that this is not as obvious as it may seem by using the example of Newcomb’s Problem. The issue here is of course Omega’s omniscience. If we construct a simulation with the rules (payoffs) of Newcomb, an Omega that is always right, and an interface for the agent to interact with the simulation, will that be enough?
Let’s say we simulate Omega’s prediction by a coin toss and repeat the simulation (without payoffs) until the coin toss matches the agent’s decision. This seems to adhere to all specifications of Newcomb and is (if the coin toss is hidden) in fact indistinguishable from it from the agent’s perspective. However, if the agent knows how the simulation works, a CDT agent will one-box, while it is assumed that the same agent would two-box in ‘real’ Newcomb. Since not telling the agent how the simulation works is never a solution, this simulation appears not to actually simulate Newcomb.
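A minimal sketch of this rerun-until-match mechanism (standard Newcomb payoffs assumed; the function and variable names are my own illustrative choices):

```python
import random

BOX_A = 1_000            # transparent box, always visible
BOX_B_FULL = 1_000_000   # opaque box, filled iff one-boxing was "predicted"

def run_trial(agent):
    """Repeat the round, without payoffs, until the hidden coin toss
    matches the agent's decision; only the matching run is scored.
    From the agent's perspective the 'prediction' is then always right,
    just as Newcomb specifies."""
    while True:
        predicted_one_box = random.random() < 0.5  # coin toss in place of Omega
        one_boxed = agent()                        # True = one-box, False = two-box
        if predicted_one_box == one_boxed:
            # Box B is full exactly when one-boxing was predicted.
            return BOX_B_FULL if one_boxed else BOX_A

print(run_trial(lambda: True))   # a committed one-boxer always scores 1000000
print(run_trial(lambda: False))  # a committed two-boxer always scores 1000
```

Note that a deterministic agent terminates the loop with probability 1, and that the scored run necessarily shows a ‘correct prediction’ — which is exactly the sense in which the setup is indistinguishable from Newcomb from the inside.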
Pointing out differences is of course far easier than proving that none exist. Suppose there is a problem for which we have no idea which decisions agents would make, and we want to build a real-world simulation to find out exactly that. How can we prove that this simulation really simulates the problem?
(Edit: Apparently it wasn’t apparent that this is about problems in terms of game theory and decision theory. Newcomb, Prisoner’s Dilemma, Iterated Prisoner’s Dilemma, Monty Hall, Sleeping Beauty, Two Envelopes, that sort of stuff. Should be clear now.)
Perhaps I am answering a question other than the one you are asking, but: Every exercise in simulation is an exercise in evaluating which modeling concerns are relevant to the system in question, and then accounting for those factors up to a desired level of accuracy.
If you happen to be dealing with a system simple enough to be simulated exactly—and I don’t know of any physical system for which this is possible—then it would be useful to talk about “proving” the correspondence between the simulation and the reality being modeled.
If you are dealing with a real system where you need to make approximations, my intuition says that the best you can do toward proving accuracy would be performing ample validations of the simulation against measured data and verifying that the simulation matches the data to within the expected tolerance.
I suspect that you and I have different concepts of what a simulation is, because you describe an agent (presumably a human being) interacting with the “simulation” in real time. In this case you are mucking up the dynamics of the simulation by introducing a factor which is not accommodated by the model, i.e. the human. The human’s reasoning is influenced by knowledge from outside the simulation.
I didn’t necessarily mean human agents. For example, this is a simulation of the Iterated Prisoner’s Dilemma with which non-human agents can interact. Each step, the agents make decisions based on the current state of the simulation. If you wanted, you could run exactly the same simulation with actual humans anonymously interacting via interface terminals with a server running the simulation. On the other hand, this is a non-simulation of the same problem, because it lacks actual agents interacting with it. It’s just a calculation, albeit an accurate one.
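For concreteness, a minimal IPD simulation along these lines might look as follows (the strategies and round count are illustrative; the payoff values are the standard ones):

```python
# Standard IPD payoffs, indexed by (my move, their move); 'C' = cooperate, 'D' = defect
PAYOFFS = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's last move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play_ipd(agent_a, agent_b, rounds):
    """Each step, both agents decide based on the state of the simulation so far."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = agent_a(hist_a, hist_b)
        move_b = agent_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_ipd(tit_for_tat, always_defect, 10))  # → (9, 14)
```

The agents here are plain functions of the game state; human players typing at terminals would occupy exactly the same interface.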
In general, by ‘simulation’ I mean a practical version of a problem whose original form contains elements that would make it impossible or impractical to construct in real life, but which is identical in terms of rules, interactions, results, and so on.
That is more or less the question I am asking, and evaluating which modeling concerns are relevant to the system in question is the crucial part. But how can we be certain to have made a correct analogy or simplification? It’s easy to tell this is not the case if the end results differ, but if those are what we want to learn then we need a different approach.
Is it possible to simulate Omega, for example? Like the mentioned repeated coin toss, except that we would need to prove that our simulation does in fact in all cases lead to the same decisions that an actual Omega would. Or what if we need statistically significant results from a single agent of a one-shot problem, and we can’t memory-wipe the agent? Etc.
A simulation is more likely to simulate X if it fails the way X fails than if it fails in a different way.
I’m not sure I understand what you mean by ‘failing’ with regard to simulations. Could you elaborate?
If a simulation of poker loses money in a way that is similar to a game of poker, it is a good simulation because it will allow for more accurate worst-case budgeting.
You mean, if an agent loses money. And that’s the point; if the only thing you know is that an agent loses money in a simulation of poker, how can you prove the same is true for real poker?
I think Karl Popper made the best case that there are no final proofs, only provisional ones, and that the way to find the more useful provisional proofs is to note how they fail, not how they succeed. A poker simulator that can tell me accurately how much I might lose is more helpful than one that tells me how much I might win. I can budget based on the former but not the latter.
If you want final proofs (models, theories, simulations) the answer is there are no scientific final proofs.
I could be wrong, or perhaps I have answered a question not asked.
It’s not quite clear to me what you have in mind here. Are you envisioning this with human agents or with programs? If with humans, how will they not remember that Omega got it wrong on the past run? If with programs, what’s the purpose of the coin?
If you substitute Omega with a repeated coin toss, there is no Omega, and there is no concept of Omega being always right. Instead of repeating the problem, you can also run several instances of the simulation with several agents simultaneously, counting only those instances in which the prediction matches the decision.
For this simulation, it is completely irrelevant whether the multiple agents are actually identical human beings, as long as their decision-making process is identical (and deterministic).
Ah, that makes sense.
Can you taboo “problem”?
If anything, I expected to be asked to taboo ‘simulation’ — by ‘problem’ I really just mean game theoretical problems such as Newcomb, Prisoner’s Dilemma, Iterated Prisoner’s Dilemma, Monty Hall, Sleeping Beauty, Two Envelopes, and so forth.
Would tabooing ‘problem’ really be helpful?
It would for me! “Problem” is an extremely broad word. I would also like it if you tabooed “simulation.”
In terms of game theory, ‘problem’ is not an extremely broad word at all, and I’m not aware of any grey areas, either. I guess you could define a game-theoretical problem as a ruleset within which agents get payoffs based on decisions they or others make. I really fail to see why you think this term that is prominently featured on LW should be tabooed.
I gave a definition for ‘simulation’ in another comment:
I’ll taboo the term if others tell me to or upvote your comment, but at present I see no need for it.
It was not obvious to me that you were talking about game-theoretic problems. “Problem” is not a word owned solely by game theorists.
It’s unclear to me what you mean by this. If a problem contains elements which are impossible to construct in real life, in what sense can a practical version be said to be identical in terms of rules, interactions, results, and so on?
I have edited my top-level post to clarify what kind of problems I mean.
For a trivial example, Omega predicting an otherwise irrelevant random factor such as a fair coin toss can be reduced to the random factor itself, thereby getting rid of Omega. Equivalence is easy to prove: regardless of whether we allow for backwards causality and whatnot, a fair coin is always fair, and even if we assume that Omega may be wrong, the probability of error must still be the same for either side of the coin. So in the end, Omega is exactly as random as the coin itself, no matter Omega’s actual accuracy. Of course this wouldn’t apply if the result of the coin toss were also relevant in some other way.
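This reduction can be sanity-checked directly: with a fair coin and a symmetric per-side accuracy p, the prediction itself comes out uniform for every p (a numerical check, not a proof; the function name is mine):

```python
def prediction_distribution(p_correct):
    """Probability that Omega predicts heads, given a fair coin toss
    and a symmetric per-side accuracy p_correct."""
    p_heads = 0.5
    # Omega predicts heads either by correctly calling a heads toss
    # or by incorrectly calling a tails toss.
    return p_heads * p_correct + (1 - p_heads) * (1 - p_correct)

# Uniform for any accuracy: the prediction carries no structure beyond the coin's.
for p in (1.0, 0.9, 0.5, 0.1):
    assert abs(prediction_distribution(p) - 0.5) < 1e-12
```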
Okay, so right now I don’t understand what your question is. It sounds to me like “how can we prove that simulations are simulations?” given what I understand to be your definition of a simulation.
The question is: How can I prove that all possible agents decide identically whether they’re considering the simulation or the original problem?
To further illustrate the point of problem and simulation, suppose I have a tank and a bazooka and want to know whether the bazooka would make the tank blow up, but because tanks are somewhat expensive I build another, much cheaper tank lacking all parts I deem irrelevant such as tracks, crew, fire-control and so on. My model tank blows up. But how can I say with certainty that the original would blow up as well? After all, the tracks might have provided additional protection. Could I have used tracks of inferior quality for my model? Which cheaper material would have the same resistance to penetration?
Tank and bazooka are the problem, of which the tank is the impractical part that is replaced by the model tank in the simulation.
You… can’t?
This is obviously not about bazookas and tanks. If you want to know whether real tanks really blow up, you need real evidence. If you want to know whether CDT defects in PD, you don’t. You can do maths with logic and reason alone, and fortunately this is 100% about maths.
You have not given me anything like a precise statement of a mathematical problem.
Here you go:
Given a problem A which is impossible or impractical in real life, find a practical problem B (called simulation) with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions in analogous states.
Solve for Newcomb or other problems at will. Bonus points for finding generalized approach.
That is not a precise statement of a mathematical problem. What do “impractical” and “practical” mean? What does “analogous” mean?
“Impractical” means that you don’t want to or can’t realize the problem in its original form, for example because it would be too expensive or because you don’t have a prison handy and can’t find any prisoner rental service.
“Practical” pretty much means the opposite, for example because it’s inexpensive or because you happen to be a prison director and are not particularly bent on interpreting the law orthodoxly.
“Analogous” basically means: if you can find isomorphisms between the set of states of problem A and the set of states of problem B, as well as between the set of decisions of problem A and the set of decisions of problem B, then each thus mapped pair of decisions or states is called analogous, provided that analogous decisions lead to analogous states and analogous states imply analogous decisions.
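In symbols (my own notation, not the poster’s): with state sets S_A, S_B, decision sets D_A, D_B, and bijections f and g, the condition reads roughly:

```latex
f : S_A \to S_B, \quad g : D_A \to D_B \quad \text{bijections, and for all } s, s' \in S_A,\; d \in D_A:
\qquad s \xrightarrow{\,d\,} s' \iff f(s) \xrightarrow{\,g(d)\,} f(s'),
\qquad \text{an agent at } s \text{ chooses } d \iff \text{at } f(s) \text{ it chooses } g(d).
```

(Here s →d s′ abbreviates “decision d in state s leads to state s′”; the second condition is the one that quantifies over all possible agents.)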
This doesn’t sound like a mathematical problem, then. It’s a modeling problem.
“It’s not a dog, it’s a poodle!”
Everything you claim to not understand or misunderstand or otherwise question is either trivial or has been answered several times already, and if A is the hypothesis that you generally don’t understand a whole lot of maths, and B is the hypothesis that you are deliberately being impertinent, then from my perspective p(A∨B) is getting rather close to 1.
I am a graduate student in mathematics, and I can point you to large quantities of evidence that hypothesis A is false. I recognize that the tone of my previous comments may have been unnecessarily antagonistic, and I apologize for that, but I genuinely don’t understand what question you’re asking. If you don’t care enough to explain it to me, that’s fine, but you should take it as at least weak Bayesian evidence that other people also won’t understand what question you’re asking.
It’s not the antagonistic tone of your comments that puts me off, it’s the way in which you seem to deliberately not understand things. For example my definition of analogous — what else could you possibly have expected in this context? No, don’t answer that.
I believe I have said everything already, but I’ll put it in a slightly different way:
Given a problem A, find an analogous problem B with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions, or prove that such a problem B cannot exist.
For instance, how can we find a problem that is analogous to Newcomb, but without Omega? I have described such an analogous problem in my top-level post and demonstrated how CDT agents will not, in the initial state, make the analogous decision. What we’re looking for is a problem in which any imaginable agent would, and we can prove it. If we believe that such a problem cannot exist without Omega, how can we prove that?
The meaning of analogous should be very clear by now. Screw practical and impractical.
As an aside, I don’t know what kind of stuff they teach at US grad schools, but what’s of help here is familiarity with methods of proof and a mathematical mindset rather than mathematical knowledge, apart from some basic game theory and decision theory. As far as I know, what I’m trying to do here is uncharted territory.
The question is how close you wanted the analogy to be.
Okay, this is clearer.
I can point you to a large body of evidence that I have all of these things.
Close enough that anything we can infer from the analogous problem must apply to the original problem as well, especially concerning the decisions agents make. I thought I said that a few times.
Does that imply it is actually clear? Do you have an approach for this? A way to divide the problem into smaller chunks? An idea how to tackle the issue of “any possible agent”?
I’ll give you a second data point to consider. I am a soon-to-be-graduated pure math undergraduate. I have no idea what you are asking, beyond very vague guesses. Nothing in your post or the preceding discussion is of a “rather mathematical nature”, let alone a precise specification of a mathematical problem.
If you think that you are communicating clearly, then you are wrong. Try again.
Given a problem A, find an analogous problem B with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions, or prove that such a problem B cannot exist.
You do realize that game theory is a branch of mathematics, as is decision theory? That we are trying to prove something here, not by empirical evidence, but by logic and reason alone? What do you think this is, social economics?
Your question is not stated in anything like the standard terminology of game theory and decision theory. It’s also not clear what you are asking on an informal level. What do you mean by “analogous”?
I’m not surprised you don’t understand what I’m asking when you don’t read what I write.
I did read that. It either doesn’t say anything at all, or else it trivializes the problem when you unpack it.
Also, this is not worth my time. I’m out.
What you have stated is unclear enough that I can’t recognize it as a problem in either game theory or decision theory, and meanwhile you are being very rude. Disincentivizing people who try to help you is not a good way to convince people to help you.
That’s because it’s not strictly speaking a problem in GT/DT, it’s a problem (or meta-problem if you want to call it that) about GT/DT. It’s not “which decision should agent X make”, but “how can we prove that problems A and B are identical.”
Concerning the matter of rudeness, suppose you write a post and however many comments about a mathematical issue, only for someone who doesn’t even read what you write and says he has no idea what you’re talking about to conclude that you’re not talking about mathematics. I find that rude.
The agent in Newcomb’s problem needs to know that Omega’s prediction is caused by the same factors as his actual decision. The agent does not need to know any more detail than that, but does need to know at least that much. If there were no such causal path between prediction and decision then Omega would be unable to make a reliable prediction. When there is correlation, there must, somewhere, be causation (though not necessarily in the same place as the correlation).
If the agent believes that Omega is just pretending to be able to make that prediction, but really tossed a coin and intends to publicise only the cases where the agent’s decision happened to be the same, then the agent has no reason to one-box.
If the agent believes Omega’s story, but Omega is really tossing a coin and engaging in selective reporting, then the agent’s decision may be correct on the basis of his belief, but wrong relative to the truth. Such is life.
To simulate Newcomb’s problem with a real agent, you have the problem of convincing the agent you can predict his decision, even though in fact you can’t.
I only used Newcomb as an example to show that determining whether a simulation actually simulates a problem isn’t trivial. The issue here is not finding particular simulations for Newcomb or other problems, but the general concept of correctly linking problems to simulations. As I said, it’s a rather mathematical issue. Your last statement seems the most relevant one to me:
Can we generalize this to mean “if a problem can’t exist in reality, an accurate simulation of it can’t exist either” or something along those lines? Can we prove this?
I would cast the sentence in that form, since if a problem contains some infinity, it is impossible for it to exist in reality. Can an infinite transition system be simulated by a finite transition system? If there is even one that can be, that would disprove your conjecture. The converse, of course, is not true...
I’m not sure what you mean by an infinite transition system. Are you referring to circular causality such as in Newcomb, or to an actually infinite number of states such as a variant of Sleeping Beauty in which on each day the coin is tossed anew and the experiment only ends once the coin lands heads?
Regardless, I think I have already disproven the conjecture I made above in another comment: