[EDIT: I see now what Vaniver was saying, thanks to the diagram in xer reply.] Try writing it out for the Prisoner’s Dilemma in such a way that a CDT agent would cooperate with itself in the strategy-strategy game (I mean, such that they would each output a strategy that makes them both cooperate in the original PD), and you’ll see the problem. You need to eliminate logically impossible squares of the matrix, not just add new rows.
So, let’s consider this with three strategies: 1) CooperateBot, 2) DefectBot, 3) CliqueBot.
We now get a table that looks like this (notice the letters are outcomes, not actions):
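A minimal reconstruction of that table (the payoff numbers 5, 3, 1, 0 and the assumption that CliqueBot cooperates only with an exact copy of itself are my own choices for illustration, not anything from the original); it also checks the Nash equilibria discussed just below:

```python
from itertools import product

# Standard PD payoffs, chosen here for illustration: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def cooperate_bot(opponent_name):
    return "C"

def defect_bot(opponent_name):
    return "D"

def clique_bot(opponent_name):
    # Assumption: cooperates only with an exact copy of itself, defects otherwise.
    return "C" if opponent_name == "CliqueBot" else "D"

STRATS = [("CooperateBot", cooperate_bot),
          ("DefectBot", defect_bot),
          ("CliqueBot", clique_bot)]

def outcome(row, col):
    (x_name, x), (y_name, y) = STRATS[row], STRATS[col]
    return x(y_name), y(x_name)                      # (X's action, Y's action)

# The 3x3 table: each cell is an outcome of the underlying PD, not a meta-level action.
for r in range(3):
    print([f"{a},{b}" for a, b in (outcome(r, c) for c in range(3))])

# A cell (r, c) is a Nash equilibrium if neither player gains by switching strategies.
equilibria = []
for r, c in product(range(3), repeat=2):
    ux, uy = PAYOFF[outcome(r, c)]
    if all(PAYOFF[outcome(r2, c)][0] <= ux for r2 in range(3)) and \
       all(PAYOFF[outcome(r, c2)][1] <= uy for c2 in range(3)):
        equilibria.append((r + 1, c + 1))            # 1-indexed to match the text
print(equilibria)                                    # -> [(2, 2), (3, 3)]
```

The printed table has D,D in both the (2,2) and (2,3) cells, which is the coincidence the next paragraph leans on.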
X has control over which row they get, and Y has control over which column they get. The two Nash equilibria are (2,2) and (3,3). Conveniently, (2,2) and (2,3) have the same result, and so it looks like it’ll be easy to figure out that (3,3) is the better Nash equilibrium.
Aha, I see now what you mean. Good insight!

[EDIT: The following is false.] A clever CDT would be able to act like TDT if it considered, not the choice of whether to output C or D, but the choice of which mathematical object to output (because it could output a mathematical object that evaluates to C or D in a particular way depending on the code of Y; this gives it the option of genuinely acting like TDT would).
This has the interesting conclusion that even without the benefit of self-modification, a CDT agent with a good model of the world ends up acting more like TDT than traditional game theorists would expect. (Another example of this is here.) The version of CDT in the last post, contrariwise, is equipped with a very narrow model of the world and of its options. [End falsehood.]
I think these things are fascinating, but I think it’s important to show that you can get TDT behavior without incorporating anthropic reasoning, redefinition of its actions, or anything beyond a basic kind of framework that human beings know how to program.
(By the way, I wouldn’t call option 3 CliqueBot, because CliqueBots as I defined them have problems mutually cooperating with anything whose outputs aren’t identical to theirs. I think it’s better for Option 3 to be the TDT algorithm defined in the post.)
It seems to come up all the time that people aren’t aware that CDT with a sufficiently good world model (a sufficiently accurate causal graph) is the same as TDT, even though this has been known for years. If you could address that somewhere in your sequence I think you’d save a lot of people a lot of time; it’s the most common objection to standard discourse about decision theory that I’ve seen.

I’ll discuss it in the final post.
It seems to come up all the time that people aren’t aware that CDT with a sufficiently good world model (a sufficiently accurate causal graph) is the same as TDT
So CDT stops leaving the money on the ground? Not unless the “sufficiently good world model” isn’t so much “sufficiently good” as it is an artificial hack that compensates for bad decision making by twisting what causal graphs are supposed to mean.
This has the interesting conclusion that even without the benefit of self-modification, a CDT agent with a good model of the world ends up acting more like TDT than traditional game theorists would expect.
This is a pretty common feature of comparisons between decision theories: different outcomes generally require different assumptions.
I think these things are fascinating, but I think it’s important to show that you can get TDT behavior without incorporating anthropic reasoning, redefinition of its actions, or anything beyond a basic kind of framework that human beings know how to program.
It’s not clear to me what the difference is between the TDT algorithm in your post and the method I’ve described. You need some method of determining what the outcome pair is from strategy pair, and the inference module can (hopefully) do that. The u_f that you use is the utility of the X outcome corresponding to the best Y outcome in row f, and picking the best of those corresponds to finding the best of the Nash equilibria (in the absence of bargaining problems). The only thing I don’t mention is the sanity check, but that should just be another run of the inference module.
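As a concreteness check, here is that u_f selection run on the three-strategy table from above (all names and payoff numbers are my own hypothetical choices): take Y’s best outcome in each row f, read off X’s payoff there as u_f, and pick the row with the largest u_f.

```python
# Hypothetical sketch of the u_f selection described above.
# Rows/columns: 1) CooperateBot, 2) DefectBot, 3) CliqueBot; cells are (X payoff, Y payoff).
TABLE = [
    [(3, 3), (0, 5), (0, 5)],   # X plays CooperateBot
    [(5, 0), (1, 1), (1, 1)],   # X plays DefectBot
    [(5, 0), (1, 1), (3, 3)],   # X plays CliqueBot
]

def best_row(table):
    u = {}
    for f, row in enumerate(table):
        best_col = max(range(len(row)), key=lambda c: row[c][1])  # Y's best outcome in row f
        u[f] = row[best_col][0]                                   # u_f: X's payoff at that cell
    return max(u, key=u.get) + 1                                  # 1-indexed, to match the text

print(best_row(TABLE))   # -> 3: maximizing u_f lands on the (3,3) equilibrium
```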
By the way, I wouldn’t call option 3 CliqueBot, because CliqueBots as I defined them have problems mutually cooperating with anything whose outputs aren’t identical to theirs. I think it’s better for Option 3 to be the TDT algorithm defined in the post.
Sure, but does it have a short name? ProofBot?
(Notice that Y running the full TDT algorithm corresponds to there being multiple columns in the table: if you were running X against a CooperateBot, you’d just have the first column, and the Nash equilibrium would be (2,1) or (3,1). If you were running it against CliqueBot without a sanity check, there would just be the third column, and it would think (3,3) was the Nash equilibrium, but would be in for a nasty surprise when CliqueBot rejects it because of its baggage.)
It’s not clear to me what the difference is between the TDT algorithm in your post and the method I’ve described.
If you make sure to include a sanity check, then your description should do the same thing as the TDT algorithm in the post (at least on simple games; there may be a difference in bargaining situations).
Sure, but does it have a short name? ProofBot?
I understand why you might feel it’s circular to name that row TDT, but nothing simpler (unless you count ADT/UDT as simpler) does as it does. It’s a layer more complicated than Newcomblike agents (which should also be included in your table); in order to get mutual cooperation with self and also defection against CooperateBot, it deduces whether a DefectBot or a MimicBot (C if it deduces Y=C, D otherwise) has a better outcome against Y, runs a sanity check, and if that goes through it does what the preferred strategy does.
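A rough sketch of that layering as I read it (every name here, including the deduce() oracle, is a hypothetical stand-in; a real version would be a bounded proof search, which this hand-waves):

```python
# Hypothetical sketch of the agent described above: compare DefectBot and MimicBot
# against Y, sanity-check the deduction, then act as the preferred strategy would.
# deduce(claim) stands in for a proof search and just returns True/False.

def layered_agent(opponent_source, deduce, payoff):
    # 1. Deduce the outcome each candidate strategy would get against Y.
    outcomes = {}
    for name in ("DefectBot", "MimicBot"):
        y_action = "C" if deduce(f"{opponent_source} outputs C against {name}") else "D"
        x_action = "D" if name == "DefectBot" else y_action   # MimicBot copies its deduction
        outcomes[name] = payoff[(x_action, y_action)]
    # 2. Prefer whichever candidate does better against Y.
    preferred = max(outcomes, key=outcomes.get)
    # 3. Sanity check: confirm the preferred strategy's outcome actually holds.
    if not deduce(f"the outcome of {preferred} against {opponent_source} is as deduced"):
        preferred = "DefectBot"                               # fall back to the safe default
    # 4. Do what the preferred strategy does.
    if preferred == "DefectBot":
        return "D"
    return "C" if deduce(f"{opponent_source} outputs C against MimicBot") else "D"

# Toy run: an oracle that proves Y cooperates no matter what is, in effect, CooperateBot.
PD_X = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(layered_agent("SomeSource", lambda claim: True, PD_X))  # -> "D": defects against CooperateBot
```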
After further review, I was wrong that CDT would be capable of making use of this to act like TDT. If CDT treats its output as separate from the rest of the causal graph (in the sense of the previous post), then it would still prefer to output an always-defect rather than a Löbian mathematical object. So it does take a different kind of agent to think of Nash equilibria among strategies.
Also, the combinatorics of enumerating strategies and looking for Nash equilibria are kind of awful: there are 16 different inputs that such a strategy has to deal with (i.e. what the opponent does against CooperateBot, DefectBot, NewcombBot and AntiNewcombBot), so there are 2^16 variant strategies in the same class. The one we call TDT is the Nash equilibrium, but it would take a long while to establish that in a naive implementation.
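Spelling out that count (the probe names come from the paragraph above, the rest is my own illustration): Y’s responses to the four probes form a 4-bit profile, and a strategy in this class is a map from profiles to actions.

```python
from itertools import product

probes = ("CooperateBot", "DefectBot", "NewcombBot", "AntiNewcombBot")
profiles = list(product("CD", repeat=len(probes)))   # possible response patterns of Y
print(len(profiles))                                 # 2**4  = 16 inputs to handle
print(2 ** len(profiles))                            # 2**16 = 65536 strategies in the class
```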
If CDT treats its output as separate from the rest of the causal graph (in the sense of the previous post), then it would still prefer to output an always-defect rather than a Löbian mathematical object.
When its source code is provided to its opponent, how could it be sensible to treat its output as separate from the rest of the causal graph?
Also, the combinatorics of enumerating strategies and looking for Nash equilibria are kind of awful
Sure, but it’s just as bad for the algorithm you wrote: you attempt to deduce output Y(code G, code A_i) for all A_i, which is exactly what you need to determine the Nash Equilibrium of this table. (Building the table isn’t much extra work, if it even requires more, and is done here more for illustrative than computational purposes.)
After further review, I was wrong that CDT would be capable of making use of this to act like TDT.
I am really worried that those two objections were enough to flip your position on this.
Sure, but it’s just as bad for the algorithm you wrote: you attempt to deduce output Y(code G, code A_i) for all A_i, which is exactly what you need to determine the Nash Equilibrium of this table.
No, there are four different A_i (CooperateBot, DefectBot, NewcombBot and AntiNewcombBot). 2^16 is the number of distinct agents one could write that see what Y does against the A_i and pick an action based on those responses. Just taking the maximum each time saves you from enumerating 2^16 strategies.
When its source code is provided to its opponent, how could it be sensible to treat its output as separate from the rest of the causal graph?
That is what CDT is. “Sensible” doesn’t enter into it.
(To expand on this, CDT’s way of avoiding harmful self-reference is to treat its decision as a causally separate node and try out different values for it while changing nothing else on the graph, including things that are copies of its source code. So it considers it legitimate to figure out the impact of its present decision on any agent who can see the effects of the action, but not on any agent who can predict the decision. Don’t complain to me, I didn’t make this up.)
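A toy illustration of that surgery on the twin Prisoner’s Dilemma (the helper names and payoffs are mine): CDT evaluates each action while holding the copy’s predicted action fixed, so defection comes out ahead no matter what it predicts.

```python
# X's payoffs in the PD, indexed by (X's action, twin's action).
PAYOFF_X = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cdt_choice(predicted_twin_action):
    # CDT's counterfactual: vary my decision node only; the twin's node stays as predicted,
    # even though the twin is running a copy of my source code.
    return max("CD", key=lambda a: PAYOFF_X[(a, predicted_twin_action)])

def linked_choice():
    # The contrasting evaluation: the copy's action co-varies with mine.
    return max("CD", key=lambda a: PAYOFF_X[(a, a)])

print(cdt_choice("C"), cdt_choice("D"))   # -> D D: defect, whatever the prediction says
print(linked_choice())                    # -> C: mutual cooperation beats mutual defection
```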
Just taking the maximum each time saves you from enumerating 2^16 strategies.
It’s not clear to me that’s the case. If your bot and my bot both receive the same source code for Y, we both determine the correct number of potential sub-strategies Y can use, and have to evaluate each of them against each of our A_is. I make the maximization over all of Y’s substrategies explicit by storing all of the values I obtain, but in order to get the maximum you also have to calculate all possible values. (I suppose you could get a bit of computational savings by exploiting the structure of the problem, but that may not generalize to arbitrary games.)
To expand on this, CDT’s way of avoiding harmful self-reference is to treat its decision as a causally separate node and try out different values for it while changing nothing else on the graph, including things that are copies of its source code.
The basic decision here is what to write as the source code, not the action that our bot outputs, and so CDT is fine: if it modifies the source code for X, that can impact the outputs of both X and Y. There’s no way to modify the output of X without potentially modifying the output of Y in this game, and I don’t see a reason for CDT to mistakenly hallucinate one.
Put another way, I don’t think I would use “causally separate”; I think I would use “unprecedented.” The influence diagram I’m drawing for this has three decision boxes (made by three different agents), all unprecedented, whose outputs are the code for X, Y, and G; all of them point to the calculation node of the inference module, and then all three codes and the inference module point to separate calculation nodes of X’s output and Y’s output, which then both point to the value node of Game Outcome. (You could have uncertainty nodes pointing to X’s output and Y’s output separately, but I’m ignoring mixed strategies for now.)
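Transcribing that diagram as an edge list, to make the structure easy to scan (the node names are my paraphrase of the paragraph above):

```python
# Decision nodes (all unprecedented): code_X, code_Y, code_G.
# Calculation nodes: inference_module, output_X, output_Y.  Value node: game_outcome.
EDGES = {
    "code_X":           ["inference_module", "output_X", "output_Y"],
    "code_Y":           ["inference_module", "output_X", "output_Y"],
    "code_G":           ["inference_module", "output_X", "output_Y"],
    "inference_module": ["output_X", "output_Y"],
    "output_X":         ["game_outcome"],
    "output_Y":         ["game_outcome"],
}
```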
So it considers it legitimate to figure out the impact of its present decision on any agent who can see the effects of the action, but not on any agent who can predict the decision.
To the best of my knowledge, this isn’t a feature of CDT: it’s a feature of the embedded physics module used by most CDTers. If we altered Newcomb’s problem such that Omega filled the boxes after the agent made their choice, then CDTers would one-box, and so if you have a CDTer who believes that perfect prediction is equivalent to information moving backwards in time (and that that’s possible), then you have a one-boxing CDTer.
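A toy version of that point (the numbers and helper names are invented for illustration): the two CDTers below differ only in whether the box contents sit downstream of the choice in their causal model.

```python
# Toy Newcomb: the big box holds $1,000,000 iff Omega predicts one-boxing;
# the small box always holds $1,000 and is taken only by two-boxers.
def payout(choice, big_box_full):
    return (1_000_000 if big_box_full else 0) + (1_000 if choice == "two-box" else 0)

def cdt_choice(contents_downstream_of_choice, prior_full=0.5):
    def value(choice):
        if contents_downstream_of_choice:
            # Boxes are filled after (or "backwards in time" from) the choice:
            # intervening on the choice also sets the contents.
            return payout(choice, big_box_full=(choice == "one-box"))
        # Usual physics module: contents are already fixed, so average over a prior.
        return prior_full * payout(choice, True) + (1 - prior_full) * payout(choice, False)
    return max(("one-box", "two-box"), key=value)

print(cdt_choice(contents_downstream_of_choice=False))  # -> two-box
print(cdt_choice(contents_downstream_of_choice=True))   # -> one-box
```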