Timeless Decision Theory

Timeless decision theory (TDT) is a decision theory developed by Eliezer Yudkowsky which, in slogan form, says that agents should decide as if they are determining the output of the abstract computation that they implement. The theory was developed in response to the view that rationality should be about winning (that is, about agents achieving their desired ends) rather than about behaving in a manner that we would intuitively label as rational. Prominent existing decision theories, including causal decision theory (CDT), fail to choose the winning decision in some scenarios, so there is a need for a more successful theory.

Timeless decision theory has been superseded by functional decision theory (FDT), which retains the core idea that agents should choose as if determining the output of their decision procedure.

TDT and Newcomb’s problem

A better sense of the motivations behind, and form of, TDT can be gained by considering a particular decision scenario: Newcomb’s problem. In Newcomb’s problem, a superintelligent artificial intelligence, Omega, presents you with a transparent box and an opaque box. The transparent box contains $1,000 while the opaque box contains either $1,000,000 or nothing. You are given the choice to take either both boxes (called two-boxing) or just the opaque box (one-boxing). However, things are complicated by the fact that Omega is an almost perfect predictor of human behavior and has filled the opaque box as follows: if Omega predicted that you would one-box, it filled the box with $1,000,000, whereas if it predicted that you would two-box, it left the box empty.

Many people find it intuitive that it is rational to two-box in this case. As the opaque box is already filled, your decision cannot influence its contents, so you may as well take both boxes and gain the extra $1,000 from the transparent box. CDT formalizes this style of reasoning. However, one-boxers win in this scenario. After all, if you one-box then Omega (almost certainly) predicted that you would do so and hence filled the opaque box with $1,000,000. So you will almost certainly end up with $1,000,000 if you one-box. On the other hand, if you two-box, Omega (almost certainly) predicted this and so left the opaque box empty. So you will almost certainly end up with $1,000 (from the transparent box) if you two-box. Consequently, if rationality is about winning then it’s rational to one-box in Newcomb’s problem (and hence CDT fails to be an adequate decision theory).
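The arithmetic behind this argument can be made explicit. The following is a minimal sketch in Python; the accuracy parameter p = 0.99 and the function name expected_payoff are assumptions for illustration, since the scenario only specifies that Omega is an almost perfect predictor.

```python
# Expected payoffs in Newcomb's problem, given a predictor that is
# correct with probability p (p = 0.99 is an assumed value; the
# scenario only says "almost perfect").

def expected_payoff(one_box: bool, p: float = 0.99) -> float:
    """Expected dollars from each choice, given predictor accuracy p."""
    if one_box:
        # With probability p, Omega predicted one-boxing and put
        # $1,000,000 in the opaque box; otherwise the box is empty.
        return p * 1_000_000
    # Two-boxing always keeps the transparent $1,000; the opaque box
    # holds $1,000,000 only if Omega wrongly predicted one-boxing.
    return 1_000 + (1 - p) * 1_000_000

print(expected_payoff(one_box=True))   # 990000.0
print(expected_payoff(one_box=False))  # 11000.0
```

For any p above roughly 0.5005, one-boxing has the higher expected payoff, so the conclusion does not depend on the predictor being nearly perfect.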

TDT endorses one-boxing in this scenario and hence endorses the winning decision. When Omega predicts your behavior, it carries out the same abstract computation that you carry out when you decide whether to one-box or two-box. To make this point clear, we can imagine that Omega makes its prediction by creating a simulation of you and observing its behavior in Newcomb’s problem. This simulation decides according to the same abstract computation as you do, since you and it decide in the same manner. Now, given that TDT says to act as if deciding the output of this computation, it tells you to act as if your decision to one-box can determine the behavior of the simulation (or, more generally, Omega’s prediction) and hence the filling of the boxes. So TDT correctly endorses one-boxing in Newcomb’s problem, as it tells the agent to act as if doing so will lead them to get $1,000,000 instead of $1,000.
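The "same computation" point can be made concrete with a toy sketch (an illustration of the idea, not code from Yudkowsky's paper; the function names decide and omega_fills_opaque_box are hypothetical) in which Omega's prediction is literally a second run of the agent's decision function:

```python
# Toy model: the agent's choice and Omega's prediction are two runs
# of one abstract computation, so they cannot come apart.

def decide() -> str:
    """The agent's decision procedure -- the abstract computation whose
    output TDT says the agent should act as if it is determining."""
    return "one-box"

def omega_fills_opaque_box() -> int:
    # Omega's near-perfect simulation of the agent is modeled here as
    # a literal call to the same decision function.
    prediction = decide()
    return 1_000_000 if prediction == "one-box" else 0

opaque = omega_fills_opaque_box()   # the boxes are filled first
choice = decide()                   # then the agent actually chooses
payoff = opaque + (1_000 if choice == "two-box" else 0)
print(choice, payoff)               # one-box 1000000
```

Editing decide to return "two-box" changes the prediction and the actual choice together, leaving a payoff of $1,000; no way of writing decide yields the two-boxer's hoped-for $1,001,000.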

TDT and other decision scenarios

TDT also wins in a range of other cases, including medical Newcomb’s problems, Parfit’s hitchhiker, and the one-shot prisoner’s dilemma. However, there are other scenarios where TDT does not win, including counterfactual mugging. This suggests that TDT still requires further development if it is to become a fully adequate decision theory. Given this, there is some motivation to also consider alternative decision theories alongside TDT, like updateless decision theory (UDT), which also wins in a range of scenarios but has its own problem cases. It seems likely that both of these theories draw on insights which are crucial to progressing our understanding of decision theory. So while TDT requires further development to be entirely adequate, it nevertheless represents a substantial step toward developing a decision theory that always endorses the winning decision.

Formalization of TDT

Coming to fully grasp TDT requires an understanding of how the theory is formalized. Very briefly, TDT is formalized by supplementing causal Bayesian networks, which can be thought of as graphs representing causal relations, in two ways. First, these graphs are supplemented with nodes representing abstract computations and an agent’s uncertainty about the result of these computations. Such a node might represent an agent’s uncertainty about the result of a mathematical sum. Second, TDT treats the agent’s decision as the abstract computation that underlies the agent’s decision process. These two features transform causal Bayesian networks into timeless decision diagrams. Using these supplemented diagrams, TDT is able to determine the winning decision in a whole range of decision scenarios. For a more detailed description of the formalization of TDT, see Eliezer Yudkowsky’s timeless decision theory paper.
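As a rough structural picture, the diagram for Newcomb's problem might be sketched as below. The representation (a plain parent-list graph, the node names, and the downstream helper) is an assumption made for illustration, not Yudkowsky's notation. The point is that the logical node for the decision computation is an ancestor of both the agent's act and Omega's prediction, so intervening there (as TDT does) reaches the box contents, while intervening on the physical act (as CDT does) does not:

```python
# A sketch of a "timeless decision diagram" for Newcomb's problem.
# Node names and the parent-list encoding are illustrative assumptions.

graph = {
    "DecisionComputation": [],                   # logical (platonic) node
    "AgentAct":        ["DecisionComputation"],  # your physical choice
    "OmegaPrediction": ["DecisionComputation"],  # Omega's simulation of you
    "BoxContents":     ["OmegaPrediction"],
    "Payoff":          ["AgentAct", "BoxContents"],
}

def downstream(graph: dict, node: str) -> set:
    """Every node an intervention on `node` can affect."""
    hit, frontier = set(), [node]
    while frontier:
        current = frontier.pop()
        for child, parents in graph.items():
            if current in parents and child not in hit:
                hit.add(child)
                frontier.append(child)
    return hit

# TDT's intervention point reaches the box contents; CDT's does not.
print(sorted(downstream(graph, "DecisionComputation")))
# ['AgentAct', 'BoxContents', 'OmegaPrediction', 'Payoff']
print(sorted(downstream(graph, "AgentAct")))
# ['Payoff']
```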

Notable Posts

Timeless Decision Theory: Problems I Can’t Solve (Eliezer Yudkowsky, Jul 20, 2009; 57 points, 156 comments, 6 min read)

Timeless Control (Eliezer Yudkowsky, Jun 7, 2008; 47 points, 69 comments, 9 min read)

Timeless Causality (Eliezer Yudkowsky, May 29, 2008; 48 points, 67 comments, 8 min read)

Timeless Identity (Eliezer Yudkowsky, Jun 3, 2008; 61 points, 248 comments, 14 min read)

Do Timeless Decision Theorists reject all blackmail from other Timeless Decision Theorists? (myren, Nov 11, 2022; 7 points, 8 comments, 3 min read)

Timeless Decision Theory and Meta-Circular Decision Theory (Eliezer Yudkowsky, Aug 20, 2009; 42 points, 37 comments, 10 min read)

Ingredients of Timeless Decision Theory (Eliezer Yudkowsky, Aug 19, 2009; 52 points, 232 comments, 7 min read)

A problem with Timeless Decision Theory (TDT) (Gary_Drescher, Feb 4, 2010; 48 points, 140 comments, 3 min read)

One Doubt About Timeless Decision Theories (Chris_Leong, Oct 22, 2018; 13 points, 8 comments, 1 min read)

A Paradox in Timeless Decision Theory (AlexMennen, Oct 25, 2010; 10 points, 7 comments, 1 min read)

Discussion for Eliezer Yudkowsky’s paper: Timeless Decision Theory (Alexei, Jan 6, 2011; 16 points, 65 comments, 1 min read)

The Difference Between Classical, Evidential, and Timeless Decision Theories (DanielLC, Mar 26, 2011; 6 points, 32 comments, 1 min read)

How I Lost 100 Pounds Using TDT (Zvi, Mar 14, 2011; 135 points, 242 comments, 4 min read)

Decision Theories: A Semi-Formal Analysis, Part III (orthonormal, Apr 14, 2012; 36 points, 55 comments, 9 min read)

Timelessness as a Conservative Extension of Causal Decision Theory ([deleted], May 28, 2014; 25 points, 65 comments, 14 min read)

If you choose not to decide, you still have made a choice. (Zvi, Mar 24, 2017; 5 points, 0 comments, 2 min read)

Decision Theories: A Semi-Formal Analysis, Part II (orthonormal, Apr 6, 2012; 26 points, 28 comments, 7 min read)

Decision Theory FAQ (lukeprog, Feb 28, 2013; 119 points, 487 comments, 58 min read)

Newcomb’s paradox complete solution. (Augs SMSHacks, Mar 15, 2023; −12 points, 13 comments, 3 min read)

Does Time Linearity Shape Human Self-Directed Evolution, and will AGI/ASI Transcend or Destabilise Reality? (Emmely, Feb 5, 2025; 1 point, 0 comments, 3 min read)

Anti-Parfit’s Hitchhiker (k64, Feb 4, 2022; 2 points, 3 comments, 1 min read)

Breaking Newcomb’s Problem with Non-Halting states (Slimepriestess, Sep 4, 2022; 16 points, 9 comments, 5 min read)

FDT is not directly comparable to CDT and EDT (SMK, Sep 29, 2022; 40 points, 8 comments, 21 min read)

Humans do acausal coordination all the time (Adam Jermyn, Nov 2, 2022; 57 points, 35 comments, 3 min read)