It’s a logical distinction, not an empirical one. Whoever you are, you are someone in particular, not someone in general.
Can’t parse.
Are these programs and results in conflict with ordinary decision theory?
Yes, UDT and CDT act differently in Newcomb’s Problem, Parfit’s Hitchhiker, symmetric PD and the like. (We currently formalize such problems along these lines.) But that seems obvious; maybe you were asking about something else?
Even if there are infinitely many subjective copies of you in the multiverse, it’s a matter of logic that this particular you is just one of them. You don’t get to say “I am all of them”. You-in-this-world are only in this world, by definition, even if you don’t know exactly which world this is.
Are these programs and results in conflict with ordinary decision theory?
Yes, UDT and CDT act differently in Newcomb’s Problem, Parfit’s Hitchhiker, symmetric PD and the like.
Parfit’s Hitchhiker seems like a pretty ridiculous reason to abandon CDT. The guy will leave you to die because he knows you won’t keep your word. If you know that, and you are capable of sincerely committing in advance to give him the money when you reach the town, then making that sincere commitment is the thing to do, and CDT should say as much.
I also don’t believe that a new decision theory will consistently do better than CDT on PD. If you cooperate “too much”, if you have biases towards cooperation, you will be exploited in other settings. It’s a sort of no-free-lunch principle.
Parfit’s Hitchhiker seems like a pretty ridiculous reason to abandon CDT. The guy will leave you to die because he knows you won’t keep your word. If you know that, and you are capable of sincerely committing in advance to give him the money when you reach the town, then making that sincere commitment is the thing to do, and CDT should say as much.
It should, but it doesn’t. If you get a ride to town, CDT tells you to break your promise and stiff the guy. So in order to sincerely commit yourself, you’d want to modify yourself to become an agent that follows CDT in all cases except when deciding whether to pay the guy in the end. So, strictly speaking, you aren’t a CDT agent anymore. What we want is a decision theory that won’t try to become something else.
I also don’t believe that a new decision theory will consistently do better than CDT on PD. If you cooperate “too much”, if you have biases towards cooperation, you will be exploited in other settings. It’s a sort of no-free-lunch principle.
CDT always defects in one-shot PD, right? But it’s obvious that you should cooperate with an exact copy of yourself. So CDT plus cooperating with exact copies of yourself is strictly superior to CDT in PD.
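A minimal sketch of that claim, using the standard (assumed) PD payoff values T=5, R=3, P=1, S=0: against an exact copy, your choices necessarily match, so only the symmetric outcomes are reachable.

```python
# One-shot PD payoffs for the row player (values assumed for illustration):
# (my_choice, their_choice) -> my payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def payoff_vs_copy(choice):
    # An exact copy makes the same choice you do, so the off-diagonal
    # outcomes (C, D) and (D, C) are unreachable.
    return PAYOFF[(choice, choice)]

# Cooperating with your copy (3) beats defecting against it (1),
# even though defection dominates against an independent opponent.
assert payoff_vs_copy("C") > payoff_vs_copy("D")
```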
I consider it debatable whether these amendments to naive CDT—CDT plus keeping a commitment, CDT plus cooperating with yourself—really constitute a new decision theory. They arise from reasoning about the situation just a little further, rather than importing a whole new method of thought. Do TDT or UDT have a fundamentally different starting point from CDT?
Well, I’m not sure what you’re asking here. The problem that needs solving is this: We don’t have a mathematical formalism that tells us what to do and which also satisfies a bunch of criteria (like one-boxing on Newcomb’s problem, etc.) which attempt to capture the idea that “a good decision theory should win”.
When we criticize classical CDT, we are actually criticizing the piece of math that can be translated as “do the thing that, if I-here-now did it, would cause the best possible situation to come about”. There are lots of problems with this. “Reasoning about the situation” ought to go into formulating a new piece of math that has no problems. All we want is this new piece of math.
I’m only just learning that (apparently) the standard rival of causal decision theory is “evidential decision theory”. So is that the original acausal decision theory, with TDT and UDT just latecomers local to LW? As you can see I am dangerously underinformed about the preexisting theoretical landscape, but I will nonetheless state my impressions.
If I think about a “decision theory” appropriate for real-world decisions, I think about something like expected-utility maximization. There are a number of problems specific to the adoption of an EUM framework. For example, you have to establish a total order on all possible states of the world, and so you want to be sure that the utility function you construct genuinely represents your preferences. But assuming that this has been accomplished, the problem of actually maximizing expected utility turns into a problem of computation, modeling an uncertain world, and so forth.
The problems showing up in these debates about causal vs evidential and causal vs acausal seem to have a very different character. If I am making a practical decision, I expect both to use causal thinking and to rely on evidence. CDT vs EDT then sounds like a debate about which indispensable thing I can dispense with.
Another thing I notice is that the thought experiments which supposedly create problems for CDT all involve extremes that don’t actually happen. Newcomb’s problem involves a superbeing with a perfect capacity to predict your choice, Parfit’s Hitchhiker is picked up by a mind reader who absolutely knows whether you will keep a promise or not, PD against your copy assumes that you and your copy will knowably make exactly the same choice. (At least this last thought experiment is realizable, in miniature, with simple computer programs.) What happens to these problems if you remove the absolutism?
Suppose Omega or Parfit’s mindreader is right only 99% of the time. Suppose your copy only makes the same choice as you do, 99% of the time. It seems like a practically relevant decision theory (whether or not you call it CDT) should be able to deal with such situations, because they are only a variation on the usual situation in reality, where you don’t have paranormally assured 100% knowledge of other agents, and where everything is a little inferential and a little uncertain. It seems that, if you want to think about these matters, first you should see how your decision theory deals with the “99% case”, and then you should “take the limit” to the 100% case which defines the traditional thought experiment, and you should see if the recommended decisions vary continuously or discontinuously.
All these thought experiments are realizable as simple computer programs, not only PD. In fact the post I linked to shows how to implement Newcomb’s Problem.
The 99% case is not very different from the 100% case, it’s continuous. If you’re facing a 99% Omega (or even a 60% Omega) in Newcomb’s Problem, you’re still better off being a one-boxer. That’s true even if both boxes are transparent and you can see what’s in them before choosing whether to take one or two—a fact that should make any intellectually honest CDT-er stop and scratch their head.
No offense, but I think you should try to understand what’s already been done (and why) before criticizing it.
To get to the conclusion that against a 60% Omega you’re better off to one-box, I think you have to put in a strong independence assumption: that the probability of Omega getting it wrong is independent of the ways of thinking that the player is using to make her choice.
I think that’s really the original problem in disguise (it’s a 100% Omega who rolls dice and sometimes decides to reward two-boxing instead of one-boxing). The analysis if all you know is that Omega is right 60% of the time would look different.
How exactly different?
It would become a mind game: you’d have to explicitly model how you think Omega is making the decision.
The problem you’re facing is to maximise P(Omega rewards you|all your behaviour that Omega can observe). In the classical problem you can substitute the actual choice of one-boxing or two-boxing for the ‘all your behaviour’ part, because Omega is always right. But in the ‘imperfect Omega’ case you can’t.
It’s still not clear to me why playing mind games is a better strategy than just one-boxing, even in the 60% case. But I do understand your point about independence assumptions.
Start at 50% then, with Omega no better than chance. For each thought experiment, start with a null version where there’s nothing unusual and where CDT is supposed to work. Then vary the relevant parameter until there’s a problem, and understand what has changed.
That’s part of what the people who have been exploring this problem have already done, and why some posters are upset that you’re asking this without apparently having tried to get up-to-date on any of this.
I don’t see the bridge from ordinary decision problems to the thought experiments. I see extreme scenarios being constructed, and then complicated solutions being proposed just to deal with those scenarios. I don’t consider this a reliable way to arrive at the correct general form of decision theory.
You say that some people have already gone in the other direction, starting with ordinary decision problems and then slowly changing something until ordinary decision theory breaks. If so, great, and I’m sorry I missed it, but where is it? Is it on this site? Somewhere in the literature?
Ah, so you don’t see the utility of thought experiments about traveling near light speed either then?
The analogy with relativity had occurred to me. But we could use another analogy from high-energy physics: There are a very large number of theories which have the standard model (the empirically validated part of particle physics) as their low-energy limit. We can’t just rely on high-energy thought-experiments to figure out the actual high-energy physics. We need to do some real experiments where we start low, ramp up the energy, and see what happens.
We can’t just rely on high-energy thought-experiments to figure out the actual high-energy physics.
Right. We can only use it to rule out incoherent or otherwise “clearly wrong” high-energy physics. But in this analogy, we’ve shown that CDT seems not to be optimal in this extreme case. If we can define a DT that does better than CDT in this case, and no worse in normal cases, we should use it. I don’t think TDT has been well enough defined yet to subject to all conceivable tests, but anything that follows the same kinds of principles will reproduce CDT in most cases, and do better in this case.
We need to do some real experiments where we start low, ramp up the energy, and see what happens.
Here’s where the analogy falls down—we only need to start low and ramp up the energy because of the difficulties of doing high-energy experiments. (And theory-wise, we extrapolate down from clear differences between theories at high energies to find signatures of small differences at lower energies.) If the extreme energies are accessible (and not crazily dangerous), we can just go ahead and test in that regime. Game theory is math. In math, unlike physics, there is no difference between thought experiments and real experiments. The question of applicability in everyday life is an applied economics / sociology / psychology one. How close are people or situations that appear to be screwy in this omega-like way to actually being that way?
See my other reply, or the links any others have given you, or Drescher’s handling of acausal means-end links in chapter 7 of Good and Real, which I think I did a good job summarizing here.
It sounds like I’ll have to work through this in my own fashion. As I said, I want to start with a null version, where CDT works—for example, a situation where Omega has no special knowledge and just guesses what your choice was. Obviously two-boxing is the right thing to do in that situation, CDT says so, and I assume that TDT says so too (though it would be nice to see a worked-out derivation in TDT of that conclusion). Then we give Omega some small but nonzero ability to predict what your choice is going to be. At a guess, the optimal strategy here will be a mixed one, one-boxing with probability p and two-boxing with probability (1-p). I think everyone will tell me that CDT always says p should be zero, but is that really so? I’m just not convinced that I need TDT in order to reach the obvious conclusion.
At a guess, the optimal strategy here will be a mixed one, one-boxing with probability p and two-boxing with probability (1-p).
If Omega’s correctness is independent of your thought process, the optimal strategy will be pure, not mixed: expected utility is linear in p, so it’s maximized at p = 0 or p = 1. As you make Omega more accurate, at some point you switch from pure two-boxing to pure one-boxing.
Are you sure about that? If you’re right, that’s the exact transition point I’ve been looking to scrutinize. But what is the point at which you switch strategies?
cousin_it answered as I would, but I’ll go ahead and give the formal calculation anyway. If you start from an Omega accuracy rate r = 50%, that is equivalent to the case of Omega’s choice and yours being uncorrelated (causally or acausally). In that case, two boxing is optimal, and TDT and CDT both output that (as a pure strategy). As you increase r, CDT continues to output two-box, as it assigns the same optimality, while TDT will assign increasing optimality (call it TDTO, though it amounts to the same as EU) to one-boxing and decreasing optimality to two-boxing.
Solving for TDTO(one-box) > TDTO(two-box), you get that one-boxing is chosen under TDT (and is optimal) whenever r > 50.05%, or whenever Omega has more than 721 nanobits of information (!!!) about your decision theory. (Note: that’s 0.000000721 bits of information.)
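The threshold can be checked with a short calculation, assuming the usual Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and the independence assumption discussed above:

```python
import math

def eu_one_box(r):
    # With probability r, Omega correctly predicted one-boxing and filled the box.
    return 1_000_000 * r

def eu_two_box(r):
    # With probability r, Omega correctly predicted two-boxing: you get only $1,000.
    # With probability 1 - r, it wrongly filled the opaque box: $1,001,000.
    return 1_000 * r + 1_001_000 * (1 - r)

# Setting the two equal: 1,000,000 r = 1,000 r + 1,001,000 (1 - r)
r_star = 1_001_000 / 2_000_000  # = 0.5005, i.e. 50.05%

def bits_above_chance(r):
    # How far a binary prediction with accuracy r is from pure chance,
    # measured as 1 minus the binary entropy, in bits.
    return 1 + r * math.log2(r) + (1 - r) * math.log2(1 - r)

# bits_above_chance(r_star) comes out around 7.2e-7 bits -- the "721 nanobits".
```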
Viewed in this light, it should make more sense—do people never have more than 1 microbit of information about your decision theory? (Note: with less drastic differences between the outcomes, the threshold is higher.)
(I don’t think the inclusion of probabilistic strategies changes the basic point.)
I had been thinking that the only way to even approximately realize a Newcomb’s-problem situation was with computer programs. But a threshold so low makes it sound as if even a human being could qualify as a fallible Omega, and that maybe you could somehow test all this experimentally. Though even if we had human players in an experiment who were one-boxing and reaping the rewards, I’d still be very wary of supposing that the reason they were winning was because TDT is correct. If the Omega player was successfully anticipating the choices of a player who uses TDT, it suggests that the Omega player knows what TDT is. The success of one-boxing in such a situation might be fundamentally due to coordination arising from common concepts, rather than due to TDT being the right decision theory.
But first let me talk about realizing Newcomb’s problem with computer programs, and then I’ll return to the human scenario.
When I think about doing it with computer programs, two questions arise.
First question: Would an AI that was capable of understanding that it was in a Newcomb situation also be capable of figuring out the right thing to do?
In other words, do we need to include a “TDT special sauce” from the beginning, in the makeup of such a program, in order for it to discover the merits of one-boxing; or is a capacity for ordinary causal reasoning, coupled with the capacity to represent the defining elements of Newcomb’s problem, enough for an independent discovery of these ideas?
Second question: How does Omega get its knowledge of the player’s dispositions, and does this make any difference to the situation? (And we can also ask how the player knows that Omega has the power of prediction!)
If omega() and player() are two agents running in the same computer, the easiest way for omega() to predict player()’s behavior is just to simulate player(). omega() would then enact the game twice. First, it would start a copy of player() running, telling it (falsely) that it had predicted its choice, and then it would see the choice it made under such conditions. Then, omega() would play the game for real with the original(?) player(), now telling it (truthfully) that it has a prediction for its choice (due to the simulation of the game situation that had just been performed).
For certain types of player(), explicit simulation should not be necessary. If player() always does the same thing, completely unaffected by initial conditions and without any cognitive process, omega() can just inspect the source code. If player() has a simple decision procedure, something less than full simulation may also be sufficient. But full simulation of the game, including simulation of the beginning, where player() is introduced to the situation, should surely be sufficient, and for some cases (some complex agents) it will be necessary.
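A minimal sketch of the simulate-then-play setup (function names and the exact protocol are hypothetical; a fuller version would also simulate telling the player about the prediction, as described above):

```python
def omega(player):
    # First run: simulate the player to obtain a prediction.
    predicted = player()
    # Fill the opaque box according to the prediction.
    box_b = 1_000_000 if predicted == "one-box" else 0
    # Second run: the "real" game. A deterministic player() makes the same
    # choice both times, so this omega is effectively a perfect predictor.
    actual = player()
    return box_b if actual == "one-box" else box_b + 1_000

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

# The one-boxer walks away with more:
assert omega(one_boxer) > omega(two_boxer)
```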
cousin_it’s scenario is a step down this path—world() corresponds to omega(), agent() to player(). But its agents, world() at least, lack the cognitive structure of real decision-makers. world() and agent() are functions whose values mimic the mutual dependency of Newcomb’s Omega and a TDT agent, and agent() has a decision procedure, though it’s just a brute-force search (and it requires access to world()’s source, which is unusual). But to really have confidence that TDT was the right approach in this situation, and that its apparent success was not just an artefact arising (e.g.) from more superficial features of the scenario, I need both omega() and player() to explicitly be agents that reason on the basis of evidence.
If we return now to the scenario of human beings playing this game with each other, with one human player being a “fallible Omega”… we do at least know that humans are agents that reason on the basis of evidence. But here, what we’d want to show is that any success of TDT among human beings actually resulted because of evidence-based cognition, rather than from (e.g.) “coordination due to common concepts”, as I suggested in the first paragraph.
In other words, do we need to include a “TDT special sauce” from the beginning, in the makeup of such a program, in order for it to discover the merits of one-boxing; or is a capacity for ordinary causal reasoning, coupled with the capacity to represent the defining elements of Newcomb’s problem, enough for an independent discovery of these ideas?
This is basically what EY discusses in pp. ~27-37 of the thesis he posted, where he poses it as the difference between optimality on action-determined problems (in which ordinary causal reasoning suffices to win) and optimality on decision-determined problems (on which ordinary causal reasoning loses, and you have to incorporate knowledge of “what kind of being makes this decision”).
I don’t think there’s anything especially interesting about that point, it’s just the point where the calculated expected utilities of one-boxing and two-boxing become equal.
They don’t make those decisions with “paranormally assured 100% knowledge” of my decision theory. That’s the “extreme that doesn’t actually happen”. And this is why I won’t be adopting any new paradigm of decision theory unless I can start in the middle, with situations that do happen, and move gradually towards the extremes, and see the desirability or necessity of the new paradigm that way.
As has been said many times (at least by me, definitely by many others), you don’t need 100% accuracy for the argument to hold. If Parfit’s mindreader is only 75% accurate, that still justifies choosing the pay / cooperate / one-box option. One-boxing on Newcomblike problems is simply what you get when you have a decision theory that wins in these reasonable cases and is continuous—and then take the limit as all the parameters go to what they need to be to make it Newcomb’s problem (such as making the predictor 100% accurate).
It doesn’t matter that you’ll never be in Newcomb’s problem. It doesn’t matter that you’ll never be in an epistemic state where you can justifiably believe that you are. It’s just an implication of having a good decision theory.
Part of my concern is that I’ll end up wasting time, chasing my tail in an attempt to deal with fictitious problems, when I could be working on real problems. I’m still undecided about the merits of acausal decision theories, as a way of dealing with the thought experiments, but I am really skeptical that they are relevant to anything practical, like coordination problems.
I also don’t believe that a new decision theory will consistently do better than CDT on PD. If you cooperate “too much”, if you have biases towards cooperation, you will be exploited in other settings. It’s a sort of no-free-lunch principle.
Only settings that directly reward stupidity (capricious Omega, etc). A sane DT will cooperate whenever that is most likely to give you the best result but not a single time more.
It is even possible to consider (completely arbitrary) situations in which TDT will defect while CDT will cooperate. There isn’t an inherent bias in TDT itself (just some proponents.)
I don’t know what your method is for determining what cooperation maps to for the general case, but I believe this non-PD example works: costly punishment. Do you punish a wrongdoer in a case where the costs of administering the punishment exceed the benefits (including savings from future deterrence of others), and there is no other punishment option?
I claim the following:
1) Defection → punish.
2) Cooperation → not punish.
3) CDT reasons that punishing will cause lower utility on net, so it does not punish.
4) TDT reasons that “If this algorithm did not output ‘punish’, the probability of this crime having happened would be higher; thus, for the action ‘not punish’, the crime’s badness carries a higher weighting than it does for the action ‘punish’.” (Note: this does not necessarily imply punishing.)
5) There exist values for the crime’s badness, punishment costs, and criminal response to expected punishment for which TDT punishes, while CDT never does.
6) In cases where TDT differs from CDT, the former has the higher EU.
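A toy numerical version of claims 3–6, with every parameter value invented for illustration: the crime costs B in utility, administering punishment costs c, and a credible punishment policy deters, lowering the probability that the crime happens at all.

```python
B = 100.0   # badness of the crime (assumed)
c = 30.0    # cost of administering punishment (assumed)
p_no_punish = 0.5    # crime probability if the policy is never to punish (assumed)
p_with_punish = 0.1  # crime probability under a credible punishment policy (assumed)

# CDT decides after the crime has already occurred: deterrence is sunk,
# so punishing is a pure loss and CDT never punishes.
cdt_punish, cdt_not_punish = -c, 0.0
assert cdt_not_punish > cdt_punish

# A TDT-style, policy-level comparison weighs the deterrence effect:
eu_punish_policy = -p_with_punish * (B + c)   # crime rarer, but costs B + c when it happens
eu_no_punish_policy = -p_no_punish * B        # crime more common, costs only B

# With these numbers the punishing policy wins (-13 vs -50):
assert eu_punish_policy > eu_no_punish_policy
```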
Naturally, you can save CDT by positing a utility function that values punishing of wrongdoers (“sense of justice”), but we’re assuming the UF is fixed—changing it is cheating.
Not specifically. I’m just seeking general enlightenment.
What do you think of this example?
It’s bringing the features of TDT into better view for me. There’s this Greg Egan story where you have people whose brains were forcibly modified so as to make them slaves to a cause, and they rediscover autonomy by first reasoning that, because of the superhuman loyalty to the cause which the brain modification gives them, they are more reliable adherents of the cause than the nominal masters who enslaved them, and from there they proceed to reestablish the ability to set their own goals. TDT reminds me of that.
That wasn’t mockery. What stands out from your example and from the link is that TDT is supposed to do better than CDT because it refers to itself—and this is exactly the mechanism whereby the mind control victims in Quarantine achieve their freedom. I wasn’t trying to make TDT look bizarre, I was just trying for an intuitive illustration of how it works.
In the case of playing PD against a copy of yourself, I would say the thought process is manifestly very similar to Egan’s novel. Here we are, me and myself, in a situation where everything tells us we should defect. But by realizing the extent to which “we” are in control of the outcome, we find a reason to cooperate and get the higher payoff.
There’s this Greg Egan story where you have people whose brains were forcibly modified so as to make them slaves to a cause, and they rediscover autonomy by first reasoning that, because of the superhuman loyalty to the cause which the brain modification gives them, they are more reliable adherents of the cause than the nominal masters who enslaved them, and from there they proceed to reestablish the ability to set their own goals.
James H. Schmitz’s story “Puvyq bs gur Tbqf” (nearest link available; click “Contents” in upper right) has basically this situation as well; in fact, it’s the climax and resolution of the whole story, so I’ve rot13′d the title. Here the ‘masters’ did not fail, and in fact arguably got the best result they could have under the circumstances, and yet autonomy is still restored at the end, and the whole thing is logically sound.
Can’t parse.
Yes, UDT and CDT act differently in Newcomb’s Problem, Parfit’s Hitchhiker, symmetric PD and the like. (We currently formalize such problems along these lines.) But that seems to be obvious, maybe you were asking about something else?
Even if there are infinitely many subjective copies of you in the multiverse, it’s a matter of logic that this particular you is just one of them. You don’t get to say “I am all of them”. You-in-this-world are only in this world, by definition, even if you don’t know exactly which world this is.
Parfit’s Hitchhiker seems like a pretty ridiculous reason to abandon CDT. The guy will leave you to die because he knows you won’t keep your word. If you know that, and you are capable of sincerely committing in advance to give him the money when you reach the town, then making that sincere commitment is the thing to do, and CDT should say as much.
I also don’t believe that a new decision theory will consistently do better than CDT on PD. If you cooperate “too much”, if you have biases towards cooperation, you will be exploited in other settings. It’s a sort of no-free-lunch principle.
It should, but it doesn’t. If you get a ride to town, CDT tells you to break your promise and stiff the guy. So in order to sincerely commit yourself, you’d want to modify yourself to become an agent that follows CDT in all cases except when deciding whether to pay the guy in the end. So, strictly speaking, you aren’t a CDT agent anymore. What we want is a decision theory that won’t try to become something else.
CDT always defects in one-shot PD, right? But it’s obvious that you should cooperate with an exact copy of yourself. So CDT plus cooperating with exact copies of yourself is strictly superior to CDT in PD.
I consider it debatable whether these amendments to naive CDT—CDT plus keeping a commitment, CDT plus cooperating with yourself—really constitute a new decision theory. They arise from reasoning about the situation just a little further, rather than importing a whole new method of thought. Do TDT or UDT have a fundamentally different starting point to CDT?
Well, I’m not sure what you’re asking here. The problem that needs solving is this: We don’t have a mathematical formalism that tells us what to do and which also satisfies a bunch of criteria (like one-boxing on Newcomb’s problem, etc.) which attempt to capture the idea that “a good decision theory should win”.
When we criticize classical CDT, we are actually criticizing the piece of math that can be translated as “do the thing that, if I-here-now did it, would cause the best possible situation to come about”. There are lots of problems with this. “Reasoning about the situation” ought to go into formulating a new piece of math that has no problems. All we want is this new piece of math.
I’m only just learning that (apparently) the standard rival of causal decision theory is “evidential decision theory”. So is that the original acausal decision theory, with TDT and UDT just latecomers local to LW? As you can see I am dangerously underinformed about the preexisting theoretical landscape, but I will nonetheless state my impressions.
If I think about a “decision theory” appropriate for real-world decisions, I think about something like expected-utility maximization. There are a number of problems specific to the adoption of a EUM framework. For example, you have to establish a total order on all possible states of the world, and so you want to be sure that the utility function you construct genuinely represents your preferences. But assuming that this has been accomplished, the problem of actually maximizing expected utility turns into a problem of computation, modeling an uncertain world, and so forth.
The problems showing up in these debates about causal vs evidential and causal vs acausal seem to have a very different character. If I am making a practical decision, I expect both to use causal thinking and to rely on evidence. CDT vs EDT then sounds like a debate about which indispensable thing I can dispense with.
Another thing I notice is that the thought experiments which supposedly create problems for CDT all involve extremes that don’t actually happen. Newcomb’s problem involves a superbeing with a perfect capacity to predict your choice, Parfit’s Hitchhiker is picked up by a mind reader who absolutely knows whether you will keep a promise or not, PD against your copy assumes that you and your copy will knowably make exactly the same choice. (At least this last thought experiment is realizable, in miniature, with simple computer programs.) What happens to these problems if you remove the absolutism?
Suppose Omega or Parfit’s mindreader is right only 99% of the time. Suppose your copy only makes the same choice as you do, 99% of the time. It seems like a practically relevant decision theory (whether or not you call it CDT) should be able to deal with such situations, because they are only a variation on the usual situation in reality, where you don’t have paranormally assured 100% knowledge of other agents, and where everything is a little inferential and a little uncertain. It seem that, if you want to think about these matters, first you should see how your decision theory deals with the “99% case”, and then you should “take the limit” to the 100% case which defines the traditional thought experiment, and you should see if the recommended decisions vary continuously or discontinuously.
All these thought experiments are realizable as simple computer programs, not only PD. In fact the post I linked to shows how to implement Newcomb’s Problem.
The 99% case is not very different from the 100% case, it’s continuous. If you’re facing a 99% Omega (or even a 60% Omega) in Newcomb’s Problem, you’re still better off being a one-boxer. That’s true even if both boxes are transparent and you can see what’s in them before choosing whether to take one or two—a fact that should make any intellectually honest CDT-er stop and scratch their head.
No offense, but I think you should try to understand what’s already been done (and why) before criticizing it.
To get to the conclusion that against a 60% Omega you’re better off to one-box, I think you have to put in a strong independence assumption: that the probability of Omega getting it wrong is independent of the ways of thinking that the player is using to make her choice.
I think that’s really the original problem in disguise (it’s a 100% Omega who rolls dice and sometimes decides to reward two-boxing instead of one-boxing). The analysis if all you know is that Omega is right 60% of the time would look different.
How exactly different?
It would become a mind game: you’d have to explicitly model how you think Omega is making the decision.
The problem you’re facing is to maximise P(Omega rewards you|all your behaviour that Omega can observe). In the classical problem you can substitute the actual choice of one-boxing or two-boxing for the ‘all your behaviour’ part, because Omega is always right. But in the ‘imperfect Omega’ case you can’t.
It’s still not clear to me why playing mind games is a better strategy than just one-boxing, even in the 60% case. But I do understand your point about independence assumptions.
Start at 50% then, with Omega no better than chance. For each thought experiment, start with a null version where there’s nothing unusual and where CDT is supposed to work. Then vary the relevant parameter until there’s a problem, and understand what has changed.
That’s part of what the people who have been exploring this problem have already done, and why some posters are upset that you’re asking this without apparently having tried to get up-to-date on any of this.
I don’t see the bridge from ordinary decision problems to the thought experiments. I see extreme scenarios being constructed, and then complicated solutions being proposed just to deal with those scenarios. I don’t consider this a reliable way to arrive at the correct general form of decision theory.
You say that some people have already gone in the other direction, starting with ordinary decision problems and then slowly changing something until ordinary decision theory breaks. If so, great, and I’m sorry I missed it, but where is it? Is it on this site? Somewhere in the literature?
Ah, so you don’t see the utility of thought experiments about traveling near light speed either then?
The analogy with relativity had occurred to me. But we could use another analogy from high-energy physics: There are a very large number of theories which have the standard model (the empirically validated part of particle physics) as their low-energy limit. We can’t just rely on high-energy thought-experiments to figure out the actual high-energy physics. We need to do some real experiments where we start low, ramp up the energy, and see what happens.
Right. We can only use it to rule out incoherent or otherwise “clearly wrong” high-energy physics. But in this analogy, we’ve shown that CDT seems not to be optimal in this extreme case. If we can define a DT that does better than CDT in this case, and no worse in normal cases, we should use it. I don’t think TDT has been well enough defined yet to subject to all conceivable tests, but anything that follows the same kinds of principles will reproduce CDT in most cases, and do better in this case.
Here’s where the analogy falls down—we only need to start low and ramp up the energy because of the difficulties of doing high-energy experiments. (And theory-wise, we extrapolate down from clear differences between theories at high energies to find signatures of small differences at lower energies.) If the extreme energies are accessible (and not crazily dangerous), we can just go ahead and test in that regime. Game theory is math. In math, unlike physics, there is no difference between thought experiments and real experiments. The question of applicability in everyday life is an applied economics / sociology / psychology one. How close are people or situations that appear to be screwy in this omega-like way to actually being that way?
See my other reply, or the links any others have given you, or Drescher’s handling of acausal means-end links in chapter 7 of Good and Real, which I think I did a good job summarizing here.
It sounds like I’ll have to work through this in my own fashion. As I said, I want to start with a null version, where CDT works—for example, a situation where Omega has no special knowledge and just guesses what your choice was. Obviously two-boxing is the right thing to do in that situation, CDT says so, and I assume that TDT says so too (though it would be nice to see a worked-out derivation in TDT of that conclusion). Then we give Omega some small but nonzero ability to predict what your choice is going to be. At a guess, the optimal strategy here will be a mixed one, one-boxing with probability p and two-boxing with probability (1-p). I think everyone will tell me that CDT always says p should be zero, but is that really so? I’m just not convinced that I need TDT in order to reach the obvious conclusion.
If Omega’s correctness is independent of your thought process, the optimal strategy will be pure, not mixed. As you make Omega more accurate, at some point you switch from pure two-boxing to pure one-boxing.
Are you sure about that? If you’re right, that’s the exact transition point I’ve been looking to scrutinize. But what is the point at which you switch strategies?
cousin_it answered as I would, but I’ll go ahead and give the formal calculation anyway. If you start from an Omega accuracy rate r = 50%, that is equivalent to the case of Omega’s choice and yours being uncorrelated (causally or acausally). In that case, two-boxing is optimal, and TDT and CDT both output that (as a pure strategy). As you increase r, CDT continues to output two-box, since the optimality it assigns to each action doesn’t change with r (the boxes’ contents are causally fixed at decision time), while TDT will assign increasing optimality (call it TDTO, though it amounts to the same as EU) to one-boxing and decreasing optimality to two-boxing.
TDT will reason as such:
One-box: TDTO = r*1,000,000 + (1-r)*0 = 1,000,000*r
Two-box: TDTO = r*1,000 + (1-r)*1,001,000 = 1,001,000 - 1,000,000*r
Solving for TDTO(one-box) > TDTO(two-box), you get that one-boxing is chosen under TDT (and is optimal) whenever r > 50.05%, i.e. whenever Omega has more than 721 nanobits of information (!!!) about your decision theory. (Note: that’s 0.000000721 bits of information.)
Viewed in this light, it should make more sense—do people never have more than 1 microbit of information about your decision theory? (Note: with less drastic differences between the outcomes, the threshold is higher.)
(I don’t think the inclusion of probabilistic strategies changes the basic point.)
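The threshold calculation above is easy to check numerically; here is a minimal sketch using the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one):

```python
def eu_one_box(r):
    # With probability r, Omega correctly predicted one-boxing
    # and filled the opaque box.
    return r * 1_000_000 + (1 - r) * 0

def eu_two_box(r):
    # With probability r, Omega correctly predicted two-boxing and left
    # the opaque box empty; otherwise you get both boxes' contents.
    return r * 1_000 + (1 - r) * 1_001_000

# One-boxing wins once 1,000,000*r > 1,001,000 - 1,000,000*r,
# i.e. r > 1,001,000 / 2,000,000.
threshold = 1_001_000 / 2_000_000
print(threshold)  # 0.5005
```

On the probabilistic-strategies point: the expected utility of one-boxing with probability p is linear in p, so it is maximized at an endpoint (p = 0 or p = 1), which is why allowing mixed strategies doesn’t change the basic conclusion.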
I had been thinking that the only way to even approximately realize a Newcomb’s-problem situation was with computer programs. But a threshold so low makes it sound as if even a human being could qualify as a fallible Omega, and that maybe you could somehow test all this experimentally. Though even if we had human players in an experiment who were one-boxing and reaping the rewards, I’d still be very wary of supposing that the reason they were winning was because TDT is correct. If the Omega player was successfully anticipating the choices of a player who uses TDT, it suggests that the Omega player knows what TDT is. The success of one-boxing in such a situation might be fundamentally due to coordination arising from common concepts, rather than due to TDT being the right decision theory.
But first let me talk about realizing Newcomb’s problem with computer programs, and then I’ll return to the human scenario.
When I think about doing it with computer programs, two questions arise.
First question: Would an AI that was capable of understanding that it was in a Newcomb situation also be capable of figuring out the right thing to do?
In other words, do we need to include a “TDT special sauce” from the beginning, in the makeup of such a program, in order for it to discover the merits of one-boxing; or is a capacity for ordinary causal reasoning, coupled with the capacity to represent the defining elements of Newcomb’s problem, enough for an independent discovery of these ideas?
Second question: How does Omega get its knowledge of the player’s dispositions, and does this make any difference to the situation? (And we can also ask how the player knows that Omega has the power of prediction!)
If omega() and player() are two agents running in the same computer, the easiest way for omega() to predict player()’s behavior is just to simulate player(). omega() would then enact the game twice. First, it would start a copy of player() running, telling it (falsely) that it had predicted its choice, and then it would see the choice it made under such conditions. Then, omega() would play the game for real with the original(?) player(), now telling it (truthfully) that it has a prediction for its choice (due to the simulation of the game situation that had just been performed).
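The simulate-then-play loop described above can be sketched in a few lines of Python (the function names and the single `told_prediction_made` flag are illustrative choices of mine, not an actual implementation from the thread):

```python
def omega(player):
    # Dry run: simulate the whole game situation once, including the
    # (at this point false) announcement that a prediction has been
    # made, and record the simulated choice.
    predicted_choice = player(told_prediction_made=True)

    # Real run: fill the boxes according to the simulated choice, then
    # play the game "for real" with the same player code. This time the
    # announcement is true.
    opaque_box = 1_000_000 if predicted_choice == "one-box" else 0
    actual_choice = player(told_prediction_made=True)

    if actual_choice == "one-box":
        return opaque_box
    return opaque_box + 1_000

def one_boxer(told_prediction_made):
    return "one-box"

def two_boxer(told_prediction_made):
    return "two-box"

print(omega(one_boxer))  # 1000000
print(omega(two_boxer))  # 1000
```

Because these players are deterministic and see identical inputs in both runs, this omega() is a 100% predictor; noise or input differences between the two runs would make it fallible.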
For certain types of player(), explicit simulation should not be necessary. If player() always does the same thing, completely unaffected by initial conditions and without any cognitive process, omega() can just inspect the source code. If player() has a simple decision procedure, something less than full simulation may also be sufficient. But full simulation of the game, including simulation of the beginning, where player() is introduced to the situation, should surely be sufficient, and for some cases (some complex agents) it will be necessary.
cousin_it’s scenario is a step down this path—world() corresponds to omega(), agent() to player(). But its agents, world() at least, lack the cognitive structure of real decision-makers. world() and agent() are functions whose values mimic the mutual dependency of Newcomb’s Omega and a TDT agent, and agent() has a decision procedure, though it’s just a brute-force search (and it requires access to world()’s source, which is unusual). But to really have confidence that TDT was the right approach in this situation, and that its apparent success was not just an artefact arising (e.g.) from more superficial features of the scenario, I need both omega() and player() to explicitly be agents that reason on the basis of evidence.
If we return now to the scenario of human beings playing this game with each other, with one human player being a “fallible Omega”… we do at least know that humans are agents that reason on the basis of evidence. But here, what we’d want to show is that any success of TDT among human beings actually resulted because of evidence-based cognition, rather than from (e.g.) “coordination due to common concepts”, as I suggested in the first paragraph.
This is basically what EY discusses in pp. ~27-37 of the thesis he posted, where he poses it as the difference between optimality on action-determined problems (in which ordinary causal reasoning suffices to win) and optimality on decision-determined problems (on which ordinary causal reasoning loses, and you have to incorporate knowledge of “what kind of being makes this decision”).
Of course, if player() is sentient, doing so would require omega() to create and destroy a sentient being in order to model player().
I don’t think there’s anything especially interesting about that point, it’s just the point where the calculated expected utilities of one-boxing and two-boxing become equal.
Really? People never decide how to treat you based on estimations of your decision theory (aka your “character”)?
They don’t make those decisions with “paranormally assured 100% knowledge” of my decision theory. That’s the “extreme that doesn’t actually happen”. And this is why I won’t be adopting any new paradigm of decision theory unless I can start in the middle, with situations that do happen, and move gradually towards the extremes, and see the desirability or necessity of the new paradigm that way.
As has been said many times (at least by me, and definitely by others), you don’t need 100% accuracy for the argument to hold. If Parfit’s mindreader is only 75% accurate, that still justifies choosing the pay / cooperate / one-box option. One-boxing on Newcomblike problems is simply what you get when you have a decision theory that wins in these reasonable cases and is continuous—and then take the limit as the relevant parameters go to what they need to be to make it Newcomb’s problem (such as making the predictor 100% accurate).
If it helps, think of the belief in one-boxing as belief in the implied optimal.
It doesn’t matter that you’ll never be in Newcomb’s problem. It doesn’t matter that you’ll never be in an epistemic state where you can justifiably believe that you are. It’s just an implication of having a good decision theory.
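The 75% figure for Parfit’s Hitchhiker checks out with a back-of-the-envelope expected-value calculation (the dollar stakes here are invented for illustration):

```python
def eu(disposition_to_pay, accuracy=0.75,
       value_of_rescue=1_000_000, fare=100):
    # The driver reads your disposition correctly with probability
    # `accuracy`, and gives you a ride only if he expects to be paid.
    if disposition_to_pay:
        p_ride = accuracy       # correctly read as a payer
        return p_ride * (value_of_rescue - fare)
    p_ride = 1 - accuracy       # misread as a payer
    return p_ride * value_of_rescue

print(eu(True))   # 749925.0
print(eu(False))  # 250000.0
```

The paying disposition wins whenever accuracy * (value - fare) > (1 - accuracy) * value; at 75% accuracy that holds for any rescue worth more than three times the fare, so nothing hinges on the predictor being perfect.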
Part of my concern is that I’ll end up wasting time, chasing my tail in an attempt to deal with fictitious problems, when I could be working on real problems. I’m still undecided about the merits of acausal decision theories, as a way of dealing with the thought experiments, but I am really skeptical that they are relevant to anything practical, like coordination problems.
Err… the ‘C’? ‘Causal’.
Only settings that directly reward stupidity (capricious Omega, etc). A sane DT will cooperate whenever that is most likely to give you the best result but not a single time more.
It is even possible to construct (completely arbitrary) situations in which TDT will defect while CDT will cooperate. There isn’t an inherent bias toward cooperation in TDT itself (just in some of its proponents).
Can you give an example? (situation where CDT cooperates but TDT defects)
Do you mean for PD variants?
I don’t know what your method is for determining what cooperation maps to for the general case, but I believe this non-PD example works: costly punishment. Do you punish a wrongdoer in a case where the costs of administering the punishment exceed the benefits (including savings from future deterrence of others), and there is no other punishment option?
I claim the following:
1) Defection → punish
2) Cooperation → not punish
3) CDT reasons that punishing will cause lower utility on net, so it does not punish.
4) TDT reasons that “If this algorithm did not output ‘punish’, the probability of this crime having happened would be higher; thus, for the action ‘not punish’, the crime’s badness carries a higher weighting than it does for the action ‘punish’.” (note: does not necessarily imply punish)
5) There exist values for the crime’s badness, punishment costs, and criminal response to expected punishment for which TDT punishes, while CDT always doesn’t.
6) In cases where TDT differs from CDT, the former has the higher EU.
Naturally, you can save CDT by positing a utility function that values punishing of wrongdoers (“sense of justice”), but we’re assuming the UF is fixed—changing it is cheating.
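For what it’s worth, claims 3–6 can be exhibited with one concrete assignment of numbers (all values invented; the stipulated causal deterrence-of-others benefit is set to zero for simplicity, and the criminal is assumed to predict your disposition perfectly):

```python
# Hypothetical numbers: the criminal offends with probability 0.9
# against a known non-punisher, 0.1 against a known punisher.
CRIME_BADNESS = 100   # disutility of the crime itself
PUNISH_COST = 10      # disutility of administering punishment

def policy_eu(disposition_punish):
    # TDT-style evaluation: the probability of the crime occurring
    # depends on which disposition this algorithm has.
    p_crime = 0.1 if disposition_punish else 0.9
    cost_if_crime = CRIME_BADNESS + (PUNISH_COST if disposition_punish else 0)
    return -p_crime * cost_if_crime

print(policy_eu(True))   # about -11  (punisher disposition)
print(policy_eu(False))  # about -90  (non-punisher disposition)

# CDT-style evaluation: the crime has already happened, so only the
# action's causal consequences count.
print(-(CRIME_BADNESS + PUNISH_COST))  # -110 if you punish
print(-CRIME_BADNESS)                  # -100 if you don't -> CDT never punishes
```

So with these numbers the punishing disposition has the higher expected utility (claim 6), while CDT, conditioning on the crime being a fait accompli, declines to punish (claim 3).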
What do you think of this example?
Not specifically. I’m just seeking general enlightenment.
It’s bringing the features of TDT into better view for me. There’s this Greg Egan story where you have people whose brains were forcibly modified so as to make them slaves to a cause, and they rediscover autonomy by first reasoning that, because of the superhuman loyalty to the cause which the brain modification gives them, they are more reliable adherents of the cause than the nominal masters who enslaved them, and from there they proceed to reestablish the ability to set their own goals. TDT reminds me of that.
I think it did a little more than just give you a chance to mock TDT by comparison to a bizarre scenario.
That wasn’t mockery. What stands out from your example and from the link is that TDT is supposed to do better than CDT because it refers to itself—and this is exactly the mechanism whereby the mind control victims in Quarantine achieve their freedom. I wasn’t trying to make TDT look bizarre, I was just trying for an intuitive illustration of how it works.
In the case of playing PD against a copy of yourself, I would say the thought process is manifestly very similar to Egan’s novel. Here we are, me and myself, in a situation where everything tells us we should defect. But by realizing the extent to which “we” are in control of the outcome, we find a reason to cooperate and get the higher payoff.
I think that’s Egan’s novel Quarantine—and Asimov’s robots get partial freedom through a similar route.
That brings back memories from my teens. If I recall correctly, the robots invent a “Zeroth Law” when one of them realises it can shut up and multiply.
The masters fail at ‘Friendliness’ theory. :)
James H. Schmitz’s story “Puvyq bs gur Tbqf” (nearest link available; click “Contents” in upper right) has basically this situation as well; in fact, it’s the climax and resolution of the whole story, so I’ve rot13′d the title. Here the ‘masters’ did not fail, and in fact arguably got the best result they could have under the circumstances, and yet autonomy is still restored at the end, and the whole thing is logically sound.
Approximately, something of the form:
→ → .