Right, but this is exactly the insight of this post put another way. The possibility of an Omega that rewards e.g. ADT is discussed in Eliezer’s TDT paper. He sets out an idea of a “fair” test, which evaluates only what you do and what you are predicted to do, not what you are. What’s interesting is that this is a “fair” test by that definition, yet it acts like an unfair test.
Because it’s a fair test, it doesn’t matter whether Omega thinks TDT and TDT-prime are the same—what matters is whether TDT-prime thinks so.
No, not even by Eliezer’s standard, because TDT is not given the same problem as other decision theories.
As stated in comments below, everyone but TDT has the information “I’m not in the simulation” (or, more precisely, “I’m not in one of the simulations of the infinite regress implied by Omega’s formulation”). The reason TDT does not have this extra piece of information comes from the fact that it is TDT, not from any decision it may make.
Right, and this is an unfairness that Eliezer’s definition fails to capture.
At this point, I need the text of that definition.
The definition is in Eliezer’s TDT paper, although a quick grep for “fair” didn’t immediately turn it up.
This variation of the problem was invented in the follow-up post (I think it was called “Sneaky strategies for TDT” or something like that):
Omega tells you that earlier he flipped a coin. If the coin came down heads, he simulated a CDT agent facing this problem; if the coin came down tails, he simulated a TDT agent facing this problem. In either case, if the simulated agent one-boxed, there is $1000000 in Box-B; if it two-boxed, Box-B is empty.

In this case TDT still one-boxes (a 50% chance of $1000000 dominates a 100% chance of $1000), and CDT still two-boxes (because that’s what CDT does). So even though both agents have an equal chance of being simulated, CDT out-performs TDT (average payoffs of 501000 vs. 500000): CDT takes advantage of TDT’s prudence, and TDT suffers for CDT’s lack of it. Notice also that TDT cannot do better by behaving like CDT (both would then get payoffs of 1000).

This shows that the class of problems we’re concerned with is not so much “fair” vs. “unfair”, but more like “those problems on which the best I can do is not necessarily the best anyone can do”. We can call it “fairness” if we want, but it’s not like Omega is discriminating against TDT in this case.
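A minimal sketch of that expected-payoff arithmetic, assuming the standard Newcomb amounts ($1000 in the small box, $1000000 in Box-B when it is filled); the names here are illustrative, not from the post:

```python
# Coin-flip variant: Omega simulates a CDT agent (heads) or a TDT agent (tails);
# Box-B is filled iff the simulated agent one-boxed.
SMALL, BIG = 1_000, 1_000_000

def box_b(simulated_choice):
    return BIG if simulated_choice == "one-box" else 0

def payoff(real_choice, simulated_choice):
    # Two-boxing always adds the small box to whatever is in Box-B.
    b = box_b(simulated_choice)
    return b if real_choice == "one-box" else b + SMALL

# CDT always two-boxes; TDT one-boxes (and its simulated copy would choose likewise).
tdt_avg = 0.5 * payoff("one-box", "two-box") + 0.5 * payoff("one-box", "one-box")
cdt_avg = 0.5 * payoff("two-box", "two-box") + 0.5 * payoff("two-box", "one-box")
print(tdt_avg, cdt_avg)  # 500000.0 501000.0
```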
This is not a zero-sum game. CDT does not outperform TDT here; it just makes a stupid mistake, and happens to pay for it less dearly than TDT does.
Let’s say Omega submits the same problem to two arbitrary decision theories, a and b. Each will either 1-box or 2-box. Here is the average payoff matrix:
Both a and b 1-box → They both get the million
Both a and b 2-box → They both get 1000 only.
One 1-boxes, the other 2-boxes → the 1-boxer averages half a million ($500000), the 2-boxer averages $1000 more ($501000).
Clearly, 1-boxing still dominates 2-boxing: whatever the other agent does, you personally average about half a million more by 1-boxing. TDT may end up with less utility than CDT by 1-boxing, but CDT is still being stupid here, while TDT is not.
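A rough sketch of that matrix under the same assumed $1000/$1000000 amounts (remember that if Omega’s coin picked you, the simulated copy makes the same choice you do):

```python
SMALL, BIG = 1_000, 1_000_000

def my_average_payoff(my_choice, other_choice):
    # Fair coin: the simulated agent is either me or the other agent.
    total = 0.0
    for simulated_choice in (my_choice, other_choice):
        box_b = BIG if simulated_choice == "one-box" else 0
        total += 0.5 * (box_b if my_choice == "one-box" else box_b + SMALL)
    return total

for other in ("one-box", "two-box"):
    print(other,
          my_average_payoff("one-box", other),   # my average if I 1-box
          my_average_payoff("two-box", other))   # my average if I 2-box
# one-box 1000000.0 501000.0
# two-box 500000.0 1000.0
```

Either way, 1-boxing comes out roughly $499000 ahead of 2-boxing.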
He sets out an idea of a “fair” test, which evaluates only what you do and what you are predicted to do, not what you are.

Two questions. First, how is this distinction justified? A decision theory is just a strategy for responding to decision tasks, and simulating agents performing the right decision tasks tells you what kind of decision theory they’re using. Why does it matter whether this is done implicitly (as in Newcomb’s discrimination against CDT) or explicitly? And second, why should we care about it? Why is it important for a decision theory to pass fair tests but not unfair tests?
Why is it important for a decision theory to pass fair tests but not unfair tests?

Well, on unfair tests a decision theory still needs to do as well as possible. If we had a version of the original Newcomb’s problem, with the one difference that a CDT agent gets $1 billion just for showing up, it’s still incumbent upon a TDT agent to walk away with $1000000 rather than $1000. The “unfair” class of problems is that class where “winning as much as possible” is distinct from “winning the most out of all possible agents”.
Real-world unfair tests could matter, though it’s not clear if there are any. However, hypothetical unfair tests aren’t very informative about what makes a good decision theory, because it’s trivial to cook one up that favours one theory and disfavours another. I think the hope was to invent a decision theory that does well on all fair tests; the example above seems to show that this may not be possible.
Not exactly. Because the problem statement says that it simulates “TDT”, if you were to expand the problem statement out into code, it would have to contain the source code for a complete instantiation of TDT. When the problem statement is run, TDT or TDT-prime can look at that instantiation and compare it to its own source code. TDT will see that they’re the same, but TDT-prime will notice that they are different, and thereby infer that it is not the simulated copy. (Any difference whatsoever is proof of this.)
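A toy sketch of that comparison (hypothetical code, nothing from the paper): any textual difference between your own source and the instantiation embedded in the problem statement proves you are not the simulated copy.

```python
def choose(my_source: str, embedded_tdt_source: str) -> str:
    if my_source == embedded_tdt_source:
        # I may be the copy Omega ran, so my choice here also fixes Box-B.
        return "one-box"
    # Any difference whatsoever proves I am not the simulation.
    return "two-box"

tdt_src = "def decide(): ...  # TDT"
tdt_prime_src = "def decide(): ...  # TDT-prime"
print(choose(tdt_src, tdt_src))        # TDT: one-box
print(choose(tdt_prime_src, tdt_src))  # TDT-prime: two-box
```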
Consider an alternative problem. Omega flips a coin, and asks you to guess what it was, with a prize if you guess correctly. If the coin was heads, he shows you a piece of paper with TDT’s source code. If the coin was tails, he shows you a piece of paper with your source code, whatever that is.
I’m not sure the part about comparing source code is correct. TDT isn’t supposed to search for exact copies of itself; it’s supposed to search for parts of the world that are logically equivalent to itself.
The key question is whether it could have been you that was simulated. If all you know is that you’re a TDT agent and what Omega simulated is a TDT agent, then it could have been you. Therefore you have to act as if your decision now may be either real or simulated. If you know you are not what Omega simulated (for any reason), then you know that you only have to worry about the ‘real’ decision.
Suppose that Omega doesn’t reveal the full source code of the simulated TDT agent, but just reveals enough logical facts about it to imply that it uses TDT. Then the “real” TDT-prime agent cannot deduce that it is different.
Yes. I think that as long as there is any chance of you being the simulated agent, you need to one-box. So you one-box if Omega tells you ‘I simulated some agent’, and one-box if Omega tells you ‘I simulated an agent that uses the same decision procedure as you’, but two-box if Omega tells you ‘I simulated an agent that had a different copyright comment in its source code than the one in your source code’.
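As a rough sketch of that rule (purely illustrative): one-box exactly when, for all you can tell from what Omega said, the simulated agent could have been you.

```python
def choose(could_be_the_simulated_agent: bool) -> str:
    # If I might be the simulation, my choice here also determines Box-B's contents.
    return "one-box" if could_be_the_simulated_agent else "two-box"

print(choose(True))   # "I simulated some agent"
print(choose(True))   # "... an agent that uses the same decision procedure as you"
print(choose(False))  # "... an agent with a different copyright comment than yours"
```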
This is just a variant of the ‘detect if I’m in a simulation’ function that others mention: if Omega gives you access to that information in any way, you can two-box. Of course, I’m a bit stuck on what Omega has told the simulation in that case. Has Omega done an infinite regress?
That’s an interesting way to look at the problem. Thanks!