So far as I understand the situation, the SIAI is working on decision theory because they want to be able to create an AI that can be guaranteed not to modify its own decision function.
There are circumstances where CDT agents will self-modify to use a different decision theory (e.g. Parfit’s Hitchhiker). If this happens (they believe), it will present a risk of goal-distortion, which is unFriendly.
Put another way: the objective isn’t to get two AIs to cooperate, the objective is to make it so that an AI won’t need to alter its decision function in order to cooperate with another AI. (Or any other theoretical bargaining partner.)
Does that make any sense? As a disclaimer, I definitely do not understand the issues here as well as the SIAI folks working on them.
I don’t think that’s quite right: a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).
As far as I understand SIAI, the idea is that decision theory is the basis of their proposed AI architecture, and they think it’s more promising than other AGI approaches and better suited to Friendliness content.
I don’t think that’s quite right: a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).
That sounds intriguing also. Again, a reference to something written by someone who understands it better might be helpful so as to make some sense of it.
Maybe it would be helpful to you to think of self-modifications and alternative decision theories as unrestricted precommitment. If you had the ability to irrevocably precommit to following any decision rule in the future, which rule would you choose? Surely it wouldn’t be pure CDT, because you can tractably identify situations where CDT loses.
you can tractably identify situations where CDT loses.
“Tractably” is a word that I find a bit unexpected in this context. What do you mean by it?
“Situations where CDT loses.” Are we talking about real-world-ish situations here? Situations in which causality applies? Situations in which the agents are free rather than being agents whose decisions have already been made for them by a programmer at some time in the past? What kind of situations do you have in mind?
And what do you mean by “loses”? Loses to who or what? Loses to agents that can foresee their opponent’s plays? Agents that have access to information channels not available to the CDT agent? Just what information channels are allowed? Why those, and not others?
ETA: And that “Surely it wouldn’t be CDT … because you can identify …” construction simply begs for completion with “Surely it would be … because you can’t identify …”. Do you have a candidate? Do you have a proof of “you can’t identify situations where it loses”? If not, what grounds do you have for criticizing?
CDT still loses to TDT in Newcomb’s problem if Omega can predict your actions with better than 50.05% accuracy. You can’t get out of this by claiming that Omega has access to unrealistic information channels, because that level of accuracy seems fairly realistic to me.
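The 50.05% figure can be checked with a quick expected-value calculation, assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one:

```python
def one_box_ev(p):
    # Omega predicts correctly with probability p; the opaque box
    # contains $1M iff Omega predicted one-boxing.
    return p * 1_000_000

def two_box_ev(p):
    # The two-boxer always gets the $1,000, plus $1M iff Omega mispredicted.
    return (1 - p) * 1_000_000 + 1_000

# Break-even accuracy: p * 1e6 = (1 - p) * 1e6 + 1e3, i.e. p = 0.5005.
# Above that threshold, one-boxing has the higher expected value.
```

So any predictor that beats a coin flip by even a tenth of a percent is enough to make one-boxing the higher-EV choice.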
Situations in which the agents are free rather than being agents whose decisions have already been made for them by a programmer at some time in the past?
Free from what? Causality? This sounds distressingly like you are relying on some notion of “free will”.
I understand that every normative decision theory adopts the assumption (convenient fiction if you prefer) that the agent being advised is acting of “his own free will”. Otherwise, why bother advising?
Being a compatibilist, as I understand Holy Scripture (i.e. The Sequences) instructs me to be, I see no incompatibility between this “fiction” of free will and the similar fiction of determinism. They model reality at different levels.
For certain purposes, it is convenient to model myself and other “free agents” as totally free in our decisions, but not completely free in carrying out those decisions. For example, my free will ego may decide to quit smoking, but my determined id has some probability of overruling that decision.
Why the distinction between agents which are free and agents which have had their decisions made for them by a programmer, then? Are you talking about cases in which specific circumstances have hard-coded behavioral responses? Every decision every agent makes is ultimately made for it by the agent’s programmer; I suppose I’m wondering where you draw the line.
As a side note, I feel very uncomfortable seeing the sequences referred to as inviolable scripture, even in jest. In my head, it just screams “oh my god how could anyone ever be doing it this wrong arghhhhhh.”
I’m still trying to figure out what I think of that reaction, and do not mention it as a criticism. I think.
Why the distinction between agents which are free and agents which have had their decisions made for them by a programmer, then? Are you talking about cases in which specific circumstances have hard-coded behavioral responses? Every decision every agent makes is ultimately made for it by the agent’s programmer; I suppose I’m wondering where you draw the line.
I make the distinction because the distinction is important. The programmer makes decisions at one point in time, with his own goals and/or utility functions, and his own knowledge of the world. The agent makes decisions at a different point in time, based on different values and different knowledge of the world. A decision theory which advises the programmer is not superior to a decision theory which advises the agent. Those two decision theories are playing different games.
“Totally free” sounds like too free. You’re not free to actually decide at time T to “decide X at time T+1” and then actually decide Y at time T+1, since that is against the laws of physics.
It’s my understanding that what goes through your head when you actually decide X at time T+1 is (approximately) what we call TDT. Or you can stick to CDT and not be able to make decisions for your future self.
I upvoted this because it seems to contain a grain of truth, but I’m nervous that someone before me had downvoted it. I don’t know whether that was because it actually is just completely wrong about what TDT is all about, or because you went a bit over the top with “against the laws of physics”.
Situations where CDT loses are precisely those situations where credible precommitment helps you, and inability to credibly precommit hurts you. There’s no shortage of those in game theory.
Ok, those are indeed a reasonable class of decisions to consider. Now, you say that CDT loses. Ok, loses to what? And presumably you don’t mean loses to opponents of your preferred decision theory. You mean loses in the sense of doing less well in the same situation. Now, presumably that means that both CDT and your candidate are playing against the same game opponent, right?
I think you see where I am going here, though I can spell it out if you wish. In claiming the superiority of the other decision theory you are changing the game in an unfair way by opening a communication channel that didn’t exist in the original game statement and which CDT has no way to make use of.
Well, yeah, kind of, that’s one way to look at it. Reformulate the question like this: what would CDT do if that communication channel were available? What general precommitment for future situations would CDT adopt and publish? That’s the question TDT people are trying to solve.
what would CDT do if that communication channel were available?
The simplest answer that moves this conversation forward would be “It would pretend to be a TDT agent that keeps its commitments, whenever that act of deception is beneficial to it. It would keep accurate statistics on how often agents claiming to be TDT agents actually are TDT agents, and adjust its priors accordingly.”
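The “keep accurate statistics and adjust its priors” step could be sketched as a simple Beta-Bernoulli update. (This is my own illustration of the strategy described above; the class name and numbers are hypothetical, not anything proposed in the thread.)

```python
class TDTClaimTracker:
    """Track how often self-proclaimed TDT agents actually keep their
    commitments, using a Beta(alpha, beta) prior over the honesty rate."""

    def __init__(self, alpha=1.0, beta=1.0):  # Beta(1,1) = uniform prior
        self.alpha, self.beta = alpha, beta

    def observe(self, kept_commitment):
        # Standard conjugate update: successes bump alpha, failures bump beta.
        if kept_commitment:
            self.alpha += 1
        else:
            self.beta += 1

    def p_honest(self):
        # Posterior mean: probability the next claimant keeps commitments.
        return self.alpha / (self.alpha + self.beta)

tracker = TDTClaimTracker()
for kept in [True, True, False, True]:  # illustrative observations
    tracker.observe(kept)
# posterior mean is now (1 + 3) / (2 + 4) = 2/3
```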
Now it is your turn to explain why this strategy violates the rules, whereas your invention of a deception-free channel did not.
I’m going to have to refer you to Eliezer’s TDT document for that. (If you’re OK with starting in medias res, the first mention of this is on pages 22-23, though there it’s just specialized to Newcomb’s Dilemmas; see pages 50-52 for an example of the limits of this hack. Elsewhere he’s argued for the more general nature of the hack.)
I’m coming to realize just how much of this stuff derives from Eliezer’s insistence on reflective consistency of a decision theory. Given any decision theory, Eliezer will find an Omega to overthrow it.
But doesn’t a diagonal argument show that no decision theory can be reflectively consistent over all test data presented by a malicious Omega? Just as there is no enumeration of the reals, isn’t there a game which can make any specified rational agent regret its rationality? Omega holds all the cards. He can always make you regret your choice of decision theory.
Just as there is no enumeration of the reals, isn’t there a game which can make any specified rational agent regret its rationality? Omega holds all the cards. He can always make you regret your choice of decision theory.
No. We can ensure that no such problem exists if we assume that (1) only the output decisions are used, not any internals; and (2) every decision is made with access to the full problem statement.
I’m not entirely sure what “every decision is made with full access to the problem statement” means, but I can’t see how it can possibly get around the diagonalisation argument. Basically, Omega just says “I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and I will give you a shiny black Ferrari iff you made the same decision as algorithm X”.
As cousin_it pointed out last time I brought this up, Caspian made this argument in response to the very first post on the Counterfactual Mugging. I’ve yet to see anyone point out a flaw in it as an existence proof.
As far as I can see the only premise needed for this diagonalisation to work is that your decision theory doesn’t agree with algorithm X on all possible decisions, so just make algorithm X “whatever happens, recite the Bible backwards 17 times”.
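The diagonalisation move is simple enough to write down directly. Here algorithm X is the fixed Bible-reciting policy, and Omega rewards agreement with X; this is a sketch of the argument only, not anyone’s actual formalism:

```python
def algorithm_x(problem):
    # The fixed reference policy from the comment above.
    return "recite the Bible backwards 17 times"

def omega_diagonal_game(agent, problem_a="problem A"):
    # Omega rewards the agent iff its decision on problem A matches X's.
    return "ferrari" if agent(problem_a) == algorithm_x(problem_a) else "nothing"

def sensible_agent(problem):
    # Stands in for any decision theory that ever disagrees with X.
    return "take the money"

# Only X itself (or an agent extensionally identical to X) wins this game:
assert omega_diagonal_game(algorithm_x) == "ferrari"
assert omega_diagonal_game(sensible_agent) == "nothing"
```

The point of the construction: for any decision theory that differs from X on some problem, Omega can build a game that punishes exactly that difference.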
I’m not entirely sure what “every decision is made with full access to the problem statement” means, but I can’t see how it can possibly get around the diagonalisation argument. Basically, Omega just says “I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and I will give you a shiny black Ferrari iff you made the same decision as algorithm X”.
In that case, your answer to problem A is being used in a context other than problem A. That other context is the real problem statement, and you didn’t have it when you chose your answer to A, so it violates the assumption.
Yeah, that definitely violates the “every decision is made with full access to the problem statement” condition. The outcome depends on your decision on problem A, but when making your decision on problem A you have no knowledge that your decision will also be used for this purpose.
I don’t see how this is useful. Let’s take a concrete example, let’s have decision problem A, Omega offers you the choice of $1,000,000, or being slapped in the face with a wet fish. Which would you like your decision theory to choose?
Now, No-mega can simulate you, say, 10 minutes before you find out who he is, and give you 3^^^3 utilons iff you chose the fish-slapping. So your algorithm has to include some sort of prior on the existence of “fish-slapping”-No-megas.
My algorithm “always get slapped in the face with a wet fish where that’s an option”, does better than any sensible algorithm on this particular problem, and I don’t see how this problem is noticeably less realistic than any others.
In other words, I guess I might be willing to believe that you can get around diagonalisation by posing some stringent limits on what sort of all-powerful Omegas you allow (can anyone point me to a proof of that?) but I don’t see how it’s interesting.
Now, No-mega can simulate you, say, 10 minutes before you find out who he is, and give you 3^^^3 utilons iff you chose the fish-slapping. So your algorithm has to include some sort of prior on the existence of “fish-slapping” No-megas.
Actually, no, the probability of fish-slapping No-megas is part of the input given to the decision theory, not part of the decision theory itself. And since every decision theory problem statement comes with an implied claim that it contains all relevant information (a completely unavoidable simplifying assumption), this probability is set to zero.
Decision theory is not about determining what sorts of problems are plausible, it’s about getting from a fully-specified problem description to an optimal answer. Your diagonalization argument requires that the problem not be fully specified in the first place.
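On this view, a decision problem hands the theory everything it needs up front. A minimal sketch of “fully-specified problem description in, optimal answer out” (the dictionary format and the 90%-accurate-predictor numbers are my own illustration):

```python
def best_action(problem):
    """problem maps each action to a fully specified outcome lottery:
    a list of (probability, utility) pairs. Returns the action that
    maximizes expected utility."""
    def expected_utility(action):
        return sum(p * u for p, u in problem[action])
    return max(problem, key=expected_utility)

# Newcomb's problem with a 90%-accurate predictor, fully specified:
newcomb_problem = {
    "one-box": [(0.9, 1_000_000), (0.1, 0)],
    "two-box": [(0.1, 1_001_000), (0.9, 1_000)],
}
# best_action(newcomb_problem) picks "one-box" (EV 900,000 vs 101,000)
```

Everything relevant, including the predictor’s accuracy, appears in the problem statement itself; the theory’s only job is the maximization step.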
“I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and I will give you a shiny black Ferrari iff you made the same decision as algorithm X”
This is a no-choice scenario. If you say that the Bible-reciter is the one that will “win” here, you are using the verb “to win” with a different meaning from the one used when we say that a particular agent “wins” by making the choice that leads to the best outcome.
But doesn’t a diagonal argument show that no decision theory can be reflectively consistent over all test data presented by a malicious Omega?
With the strong disclaimer that I have no background in decision theory beyond casually reading LW...
I don’t think so. The point of simulation (Omega) problems, to me, doesn’t seem to be to judo your intelligence against yourself; rather, it is to “throw your DT off the scent”, building weird connections between events (weird, but still vaguely possible, at least for AIs), that a particular DT isn’t capable of spotting and taking into account.
My human, real-life decision theory can be summarised as “look at as many possible end-result worlds as I can, and at what actions will bring them into being; evaluate how much I like each of them; then figure out which actions are most efficient at leading to the best worlds”. But that doesn’t exactly fly when you’re programming a computer, you need something that can be fully formalised, and that is where those strange Omega scenarios are useful, because your code must get it right “on autopilot”, it cannot improvise a smarter approach on the spot—the formula is on paper, and if it can’t solve a given problem, but another one can, it means that there is room for improvement.
In short, DT problems are just clever software debugging.
I agreed with everything you said after “I don’t think so”. So I am left confused as to why you don’t think so.
You analogize DT problems as test data used to determine whether we should accept or reject a decision theory. I am claiming that our requirements (i.e. “reflective consistency”) are so unrealistic that we will always be able to find test data forcing us to reject. Why do you not think so?
Because I suspect that there are only so many functionally different types of connections between events (at the very least, I see no hint that there must be infinitely many) and once you’ve found them all you will have the possibility of writing a DT that can’t be led to corner itself into suboptimal outcomes due to blind spots.
at the very least, I see no hint that there must be infinite ones
Am I correct in interpreting this as “infinitely many of them”? If so, I am curious as to what you mean by “functionally different types of connections between events”. Could you provide an example of some “types of connections between events”? Functionally different ones to be sure.
Presumably, the relevance must be your belief that decision theories differ in just how many of these different kinds of connections they handle correctly. Could you illustrate this by pointing out how the decision theory of your choice handles some types of connections, and why you have confidence that it does so correctly?
Am I correct in interpreting this as “infinitely many of them”?
Oops, yes. Fixed.
If so, I am curious as to what you mean by “functionally different types of connections between events”. Could you provide an example of some “types of connections between events”? Functionally different ones to be sure.
CDT can ‘see’ the classical, everyday causal connections that are marked in formulas with the symbol “>” (and I’d have to spend several hours reading at least the Stanford Encyclopaedia before I could give you a confident definition of that), but it cannot ‘see’ the connection in Newcomb’s problem between the agent’s choice of boxes and the content of the opaque box (sometimes called ‘retrocausality’).
Presumably, the relevance must be your belief that decision theories differ in just how many of these different kinds of connections they handle correctly. Could you illustrate this by pointing out how the decision theory of your choice handles some types of connections, and why you have confidence that it does so correctly?
I don’t have a favourite formal decision theory, because I am not sufficiently familiar with the underlying math and with the literature of discriminating scenarios to pick a horse. If you’re talking about the human decision “theory” of mine I described above, it doesn’t explicitly do that; the key hand-waving passage is “figure out which actions are most efficient at leading to the best worlds”, meaning I’ll use whatever knowledge I currently possess to estimate how big is the set of Everett branches where I do X and get A, compared to the set of those where I do X and get B. (For example, six months ago I hadn’t heard of the concept of acausal connections and didn’t account for them at all while plotting the likelihoods of possible futures, whereas now I do—at least technically; in practice, I think that between human agents they are a negligible factor. For another example, suppose that some years from now I became convinced that the complexity of human minds, and the variability between different ones, were much greater than I previously thought; then, given the formulation of Newcomb’s problem where Omega isn’t explicitly defined as a perfect simulator and all we know is that it has had a 100% success rate so far, I would suitably increase my estimation of the chances of Omega screwing up and making two-boxing profitable.)
CDT can ‘see’ the classical, everyday causal connections that are marked in formulas with the symbol “>” (and I’d have to spend several hours reading at least the Stanford Encyclopaedia before I could give you a confident definition of that), but it cannot ‘see’ the connection in Newcomb’s problem between the agent’s choice of boxes and the content of the opaque box (sometimes called ‘retrocausality’).
Ok, so if I understand you, there are only some finite number of valid kinds of connections between events and when we have all of them incorporated—when our decision theory can “see” each of them—we are then all done. We have the final, perfect decision theory (FPDT).
But what do you do then when someone—call him Yuri Geller—comes along and points out that we left out one important kind of connection: the “superspooky” connection. And then he provides some very impressive statistical evidence that this connection exists and sets up games in front of large (paying) audiences in which FPDT agents fail to WIN. He then proclaims the need for SSPDT.
Or, if you don’t buy that, maybe you will prefer this one. Yuri Geller doesn’t really exist. He is a thought experiment. Still the existence of even the possibility of superspooky connections proves that they really do exist and hence that we need to have SADT—Saint Anselm’s Decision Theory.
Ok, I’ve allowed my sarcasm to get the better of me. But the question remains—how are you ever going to know that you have covered all possible kinds of connections between events?
But the question remains—how are you ever going to know that you have covered all possible kinds of connections between events?
You can’t, I guess. Within an established mathematical model, it may be possible to prove that a list of possible configurations of event pairs {A, B} is exhaustive. But the model may always prove in need of expansion or refinement—whether because some element gets understood and modelled at a deeper level (e.g. the nature of ‘free’ will) or, more worryingly, because of paradigm shifts about physical reality (e.g. it turns out we can time travel).
Decision theories should usually be seen as normative, not descriptive. How “realistic” something is, is not very important, especially for thought experiments. Decision theory cashes out where you find a situation that can indeed be analyzed with it, and where you’ll secure a better outcome by following theory’s advice. For example, noticing acausal control has advantages in many real-world situations (Parfit’s Hitchhiker variants). Eliezer’s TDT paper discusses this towards the end of Part I.
I believe you misinterpreted my “unrealistic requirements”. A better choice of words would have been “unachievably stringent requirements”. I wasn’t complaining that Omega and the like are unrealistic. At least not here.
The version I have of Eliezer’s TDT paper doesn’t have a “Part I”. It is dated “September 2010” and has 112 pages. Is there a better version available?
I don’t understand your other comments. Or, perhaps more accurately, I don’t understand what they were in response to.
Ah! Thank you. I see now. The circumstances in which a CDT agent will self-modify to use a different decision theory are that:
The agent was programmed by Eliezer Yudkowsky and hence is just looking for an excuse to self-modify.
The agent is provided with a prior leading it to be open to the possibility of omniscient, yet perverse, agents bearing boxes full of money.
The agent is supplied with (presumably faked) empirical data leading it to believe that all such omniscient agents reward one-boxers.
Since the agent seeks reflective equilibrium (because programmed by aforesaid Yudkowsky), and since it knows that CDT requires two boxing, and since it has no reason to doubt that causality is important in this world, it makes exactly the change to its decision theory that seems appropriate. It continues to use CDT except on Newcomb problems, where it one boxes. That is, it self-modifies to use a different decision theory, which we can call CDTEONPWIOB.
Well, ok, though I wouldn’t have said that these are cases where CDT agents do something weird. These are cases where EYDT agents do something weird.
I apologize if it seems that the target of my sarcasm is you WrongBot. It is not.
EY has deluded himself into thinking that reflective consistency is some kind of gold standard of cognitive stability. And then he uses reflective consistency as a lever by which completely fictitious data can uproot the fundamental algorithms of rationality.
Which would be fine, except that he has apparently convinced a lot of smart people here that he knows what he is talking about. Even though he has published nothing on the topic. Even though other smart people like Robin tell him that he is trying to solve an already solved problem.
I would say more but …
This manuscript was cut off here, but interested readers are directed to the following source for more discussion:

Bibliography
Gibbard, A., and Harper, W. L. (1978), “Counterfactuals and Two Kinds of Expected Utility”, in C. A. Hooker, J. J. Leach, and E. F. McClennen (eds.), Foundations and Applications of Decision Theory, vol. 1, Reidel, Dordrecht, pp. 125-162.
Reflective consistency is not a “gold standard”. It is a basic requirement. It should be easy to come up with terrible, perverse decision theories that are reflectively consistent (EY does so, sort of, in his TDT outline, though it’s not exactly serious / thorough). The point is not that reflective consistency is a sign you’re on the right track, but that a lack of it is a sign that something is really wrong, that your decision theory is perverse. If using your decision theory causes you to abandon that same decision theory, it can’t have been a very good decision theory.
Consider it as being something like monotonicity in a voting system; it’s a weak requirement for weeding out things that are clearly bad. (Well, perhaps not everyone would agree IRV is “clearly bad”, but… it isn’t even monotonic!) It just happens that in this case evidently nobody noticed before that this would be a good condition to satisfy and hence didn’t try. :)
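For readers unfamiliar with the IRV jab: a standard textbook-style illustration of its non-monotonicity (the ballot counts here are a constructed example, not from this thread) shows that giving a candidate more first-place support can make them lose:

```python
from collections import Counter

def irv_winner(ballots):
    """ballots: list of (count, preference-tuple) pairs. Repeatedly
    eliminate the candidate with the fewest first-choice votes among
    remaining candidates, until one candidate is left."""
    remaining = {c for _, prefs in ballots for c in prefs}
    while len(remaining) > 1:
        tally = Counter()
        for count, prefs in ballots:
            top = next(c for c in prefs if c in remaining)
            tally[top] += count
        remaining.remove(min(remaining, key=lambda c: tally[c]))
    return remaining.pop()

before = [(39, ("A", "B", "C")), (35, ("B", "C", "A")), (26, ("C", "A", "B"))]
# Now 10 of the B>C>A voters decide they like A best, strictly
# improving A's standing and changing nothing else:
after = [(49, ("A", "B", "C")), (25, ("B", "C", "A")), (26, ("C", "A", "B"))]
# Before: C is eliminated first and A wins 65-35.
# After: B is eliminated first and C wins 51-49 -- extra support cost A the win.
```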
I am not sure that decision theory is an “already solved” problem. There’s the issue of what happens when agents can self-modify—and so wirehead themselves. I am pretty sure that is an unresolved “grand challenge” problem.
Because it can’t find a write-up that explains how to use it?
Perhaps you can answer the questions that I asked here: What play does TDT make in the game of Chicken? Can you point me to a description of TDT that would allow me to answer that question for myself?
Suppose I’m an agent implementing TDT. My decision in Chicken depends on how much I know about my opponent.
If I know my opponent implements the same decision procedure I do (because I have access to its source code, say), and my opponent has this knowledge about me, I swerve. In this case, my opponent and I are in symmetrical positions and its choice is fully determined by mine; my choice is between payoffs of (0,0) and (-10,-10).
Else, I act identically to a CDT agent.
As Eliezer says here, the one-sentence version of TDT is “Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.”
If I know my opponent implements the same decision procedure I do (because I have access to its source code, say), and my opponent has this knowledge about me, I swerve. In this case, my opponent and I are in symmetrical positions and its choice is fully determined by mine; my choice is between payoffs of (0,0) and (-10,-10).
I’m not sure this is right. Isn’t there a correlated equilibrium that does better?
I think we’re looking at different payoff matrices. I was using the formulation of Chicken that rewards
      |    C     |     D
    C |  0,  0   |  −1,  +1
    D | +1, −1   | −10, −10
which doesn’t have a correlated equilibrium that beats (C,C).
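The claim can be checked numerically for this matrix: the symmetric mixed Nash equilibrium plays C with probability 0.9, yielding an expected payoff of −0.1 per player, which (C, C)’s 0 beats. A quick sketch using the row player’s payoffs from the matrix above:

```python
def u_row(my_move, their_move):
    # Row player's payoffs from the Chicken matrix quoted above.
    table = {("C", "C"): 0, ("C", "D"): -1, ("D", "C"): 1, ("D", "D"): -10}
    return table[(my_move, their_move)]

def ev(my_move, q):
    # Expected payoff when the opponent plays C with probability q.
    return q * u_row(my_move, "C") + (1 - q) * u_row(my_move, "D")

# Symmetric mixed Nash: q makes each player indifferent between C and D:
#   q*0 + (1-q)*(-1) = q*1 + (1-q)*(-10)  =>  q - 1 = 11q - 10  =>  q = 0.9
q = 0.9
mixed_payoff = ev("C", q)  # equals ev("D", q): about -0.1, worse than (C, C)'s 0
```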
Using the payoff matrix Perplexed posted here, there is indeed a correlated equilibrium, which I believe the TDT agents would arrive at (given a source of randomness). My bad for not specifying the exact game I was talking about.
Why do you believe the TDT agents would find the correlated equilibrium? Your previous statement and Eliezer quote suggested that a pair of TDT agents would always play symmetrically in a symmetric game. No “spontaneous symmetry breaking”.
Even without a shared random source, there is a Nash mixed equilibrium that is also better than symmetric cooperation. Do you believe TDT would play that if there were no shared random input?
In a symmetric game, TDT agents choose symmetric strategies. Without a source of randomness, this entails playing symmetrically as well.
I’m not sure why you’re talking about shared random input. If both agents get the same input, they can both be expected to treat it in the same way and make the same decision, regardless of the input’s source. Each agent needs an independent source of randomness in order to play the mixed equilibrium; if my strategy is to play C 30% of the time, I need to know whether this iteration is part of that 30%, which I can’t do deterministically because my opponent is simulating me.
Yeah, I think any use of correlated equilibrium here is wrong—that requires a shared random source. I think in this case we just get symmetric strategies, i.e., it reduces to superrationality, where they each just get their own private random source.
I’m not sure why you’re talking about shared random input.
Sorry if this was unclear. It was a reference to the correlated pair of random variables used in a correlated equilibrium. I was saying that even without such a correlated pair, you may presume the availability of independent random variables which would allow a Nash equilibrium—still better than symmetric play in this game.
Gah, wait. I feel dumb. Why would TDT find correlated equilibria? I think I had the “correlated equilibrium” concept confused. A correlated equilibrium would require a public random source, which two TDTers won’t have.
Ignoring the whole pi-is-not-known-to-be-normal thing, how do you determine which digit of pi to use when you can’t actually communicate and you have no idea how many digits of pi the other player may already know?
Thank you. I hope you realize that you have provided an example of a game in which CDT does better than TDT. For example, in the game with the payoff matrix shown below, there is a mixed strategy Nash equilibrium which is better than the symmetric cooperative result.
So TDT is different from CDT only in cases where the game is perfectly symmetric? If you are playing a game that is roughly the symmetric PD, except that one guy’s payoffs are shifted by a tiny +epsilon, then they should both defect?
TDT is different from CDT whenever one needs to consider the interaction of multiple decisions made using the same TDT-based decision procedure. This applies both to competitions between agents, as in the case of Chicken, and to cases where an agent needs to make credible precommitments, as in Newcomb’s Problem.
In the case of an almost-symmetric PD, the TDT agents should still cooperate. To change that, you’d have to make the PD asymmetrical enough that the agents were no longer evaluating their options in the same way. If a change is small enough that a CDT agent wouldn’t change its strategy, TDT agents would also ignore it.
This doesn’t strike me as the world’s greatest explanation, but I can’t think of a better way to formulate it. Please let me know if there’s something that’s still unclear.
If a change is small enough that a CDT agent wouldn’t change its strategy, TDT agents would also ignore it.
This strikes me as a bit bizarre. You test whether a warped PD is still close enough to symmetric by asking whether a CDT agent still defects in order to decide whether a TDT agent should still cooperate? Are you sure you are not just making up these rules as you go?
Please let me know if there’s something that’s still unclear.
Much is unclear and very little seems to be coherently written down. What amazes me is that there is so much confidence given to something no one can explain clearly. So far, the only stable thing in your description of TDT is that it is better than CDT.
So far as I understand the situation, the SIAI is working on decision theory because they want to be able to create an AI that can be guaranteed not to modify its own decision function.
There are circumstances where CDT agents will self-modify to use a different decision theory (e.g. Parfit’s Hitchhiker). If this happens (they believe), it will present a risk of goal-distortion, which is unFriendly.
Put another way: the objective isn’t to get two AIs to cooperate, the objective is to make it so that an AI won’t need to alter its decision function in order to cooperate with another AI. (Or any other theoretical bargaining partner.)
Does that make any sense? As a disclaimer, I definitely do not understand the issues here as well as the SIAI folks working on them.
I don’t think that’s quite right- a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).
As far as I understand SIAI, the idea is that decision theory is the basis of their proposed AI architecture, and they think it’s more promising than other AGI approaches and better suited to Friendliness content.
That sounds intriguing also. Again, a reference to something written by someone who understands it better might be helpful so as to make some sense of it.
Maybe it would be helpful to you to think of self-modifications and alternative decision theories as unrestricted precommitment. If you had the ability to irrevocably precommit to following any decision rule in the future, which rule would you choose? Surely it wouldn’t be pure CDT, because you can tractably identify situations where CDT loses.
“Tractably” is a word that I find a bit unexpected in this context. What do you mean by it?
“Situations where CDT loses.” Are we talking about real-world-ish situations here? Situations in which causality applies? Situations in which the agents are free rather than being agents whose decisions have already been made for them by a programmer at some time in the past? What kind of situations do you have in mind?
And what do you mean by “loses”? Loses to who or what? Loses to agents that can foresee their opponent’s plays? Agents that have access to information channels not available to the CDT agent? Just what information channels are allowed? Why those, and not others?
ETA: And that “Surely it wouldn’t be CDT … because you can identify …” construction simply begs for completion with “Surely it would be … because you can’t identify …”. Do you have a candidate? Do you have a proof of “you can’t identify situations where it loses”? If not, what grounds do you have for criticizing?
CDT still loses to TDT in Newcomb’s problem if Omega can predict your actions with better than 50.05% accuracy. You can’t get out of this by claiming that Omega has access to unrealistic information channels, because that level of accuracy seems fairly realistic to me.
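The 50.05% figure can be verified with a quick expected-value computation, assuming the standard $1,000,000 / $1,000 Newcomb payoffs:

```python
# Expected-value check for Newcomb's problem with an imperfect predictor.
# Standard payoffs assumed: $1,000,000 in the opaque box iff one-boxing
# was predicted, plus a transparent box always containing $1,000.

def one_box_ev(p):
    """Expected value of one-boxing when Omega predicts correctly with probability p."""
    return p * 1_000_000

def two_box_ev(p):
    """Expected value of two-boxing: the opaque box is full only when Omega errs."""
    return (1 - p) * 1_000_000 + 1_000

# One-boxing pulls ahead exactly when p * 1e6 > (1 - p) * 1e6 + 1e3,
# i.e. when p > 0.5005.
assert one_box_ev(0.5004) < two_box_ev(0.5004)
assert one_box_ev(0.5006) > two_box_ev(0.5006)
```

At exactly p = 0.5005 the two options tie, which is where the figure in the comment above comes from.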
Free from what? Causality? This sounds distressingly like you are relying on some notion of “free will”.
(Apologies if I’m misreading you.)
I am relying on a notion of free will.
I understand that every normative decision theory adopts the assumption (convenient fiction if you prefer) that the agent being advised is acting of “his own free will”. Otherwise, why bother advising?
Being a compatibilist, as I understand Holy Scripture (i.e. The Sequences) instructs me to be, I see no incompatibility between this “fiction” of free will and the similar fiction of determinism. They model reality at different levels.
For certain purposes, it is convenient to model myself and other “free agents” as totally free in our decisions, but not completely free in carrying out those decisions. For example, my free will ego may decide to quit smoking, but my determined id has some probability of overruling that decision.
Why the distinction between agents which are free and agents which have had their decisions made for them by a programmer, then? Are you talking about cases in which specific circumstances have hard-coded behavioral responses? Every decision every agent makes is ultimately made for it by the agent’s programmer; I suppose I’m wondering where you draw the line.
As a side note, I feel very uncomfortable seeing the sequences referred to as inviolable scripture, even in jest. In my head, it just screams “oh my god how could anyone ever be doing it this wrong arghhhhhh.”
I’m still trying to figure out what I think of that reaction, and do not mention it as a criticism. I think.
I make the distinction because the distinction is important. The programmer makes decisions at one point in time, with his own goals and/or utility functions, and his own knowledge of the world. The agent makes decisions at a different point in time, based on different values and different knowledge of the world. A decision theory which advises the programmer is not superior to a decision theory which advises the agent. Those two decision theories are playing different games.
“Totally free” sounds like too free. You’re not free to actually decide at time T to “decide X at time T+1” and then actually decide Y at time T+1, since that is against the laws of physics.
It’s my understanding that what goes through your head when you actually decide X at time T+1 is (approximately) what we call TDT. Or you can stick to CDT and not be able to make decisions for your future self.
I upvoted this because it seems to contain a grain of truth, but I’m nervous that someone before me had downvoted it. I don’t know whether that was because it actually is just completely wrong about what TDT is all about, or because you went a bit over the top with “against the laws of physics”.
Situations where CDT loses are precisely those situations where credible precommitment helps you, and inability to credibly precommit hurts you. There’s no shortage of those in game theory.
Ok, those are indeed a reasonable class of decisions to consider. Now, you say that CDT loses. Ok, loses to what? And presumably you don’t mean loses to opponents of your preferred decision theory. You mean loses in the sense of doing less well in the same situation. Now, presumably that means that both CDT and your candidate are playing against the same game opponent, right?
I think you see where I am going here, though I can spell it out if you wish. In claiming the superiority of the other decision theory you are changing the game in an unfair way by opening a communication channel that didn’t exist in the original game statement and which CDT has no way to make use of.
Well, yeah, kind of, that’s one way to look at it. Reformulate the question like this: what would CDT do if that communication channel were available? What general precommitment for future situations would CDT adopt and publish? That’s the question TDT people are trying to solve.
The simplest answer that moves this conversation forward would be “It would pretend to be a TDT agent that keeps its commitments, whenever that act of deception is beneficial to it. It would keep accurate statistics on how often agents claiming to be TDT agents actually are TDT agents, and adjust its priors accordingly.”
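The statistics-keeping described in that answer could be sketched as a simple Beta-Bernoulli update; the class name and all numbers below are purely illustrative:

```python
# Sketch of the reputation bookkeeping described above: a Beta prior over
# "agents claiming to be TDT actually keep their commitments", updated as
# interactions are observed. Illustrative only; not anyone's actual proposal.

class TDTClaimTracker:
    def __init__(self, prior_kept=1, prior_broken=1):
        # Beta(1, 1) is a uniform prior over the honesty rate.
        self.kept = prior_kept
        self.broken = prior_broken

    def observe(self, commitment_kept):
        """Record one observed interaction with a claimed-TDT agent."""
        if commitment_kept:
            self.kept += 1
        else:
            self.broken += 1

    def p_honest(self):
        """Posterior mean probability that a claimed-TDT agent keeps commitments."""
        return self.kept / (self.kept + self.broken)

tracker = TDTClaimTracker()
for outcome in [True, True, False, True]:
    tracker.observe(outcome)
assert abs(tracker.p_honest() - 4 / 6) < 1e-9  # (1+3) kept out of 6 total
```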
Now it is your turn to explain why this strategy violates the rules, whereas your invention of a deception-free channel did not.
I’m going to have to refer you to Eliezer’s TDT document for that. (If you’re OK with starting in medias res, the first mention of this is on pages 22-23, though there it’s just specialized to Newcomb’s Dilemmas; see pages 50-52 for an example of the limits of this hack. Elsewhere he’s argued for the more general nature of the hack.)
Ok thanks.
I’m coming to realize just how much of this stuff derives from Eliezer’s insistence on reflective consistency of a decision theory. Given any decision theory, Eliezer will find an Omega to overthrow it.
But doesn’t a diagonal argument show that no decision theory can be reflectively consistent over all test data presented by a malicious Omega? Just as there is no enumeration of the reals, isn’t there a game which can make any specified rational agent regret its rationality? Omega holds all the cards. He can always make you regret your choice of decision theory.
No. We can ensure that no such problem exists if we assume that (1) only the output decisions are used, not any internals; and (2) every decision is made with access to the full problem statement.
I’m not entirely sure what “every decision is made with full access to the problem statement” means, but I can’t see how it can possibly get around the diagonalisation argument. Basically, Omega just says “I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and give you a shiny black Ferrari iff you made the same decision as algorithm X.”
As cousin_it pointed out last time I brought this up, Caspian made this argument in response to the very first post on the Counterfactual Mugging. I’ve yet to see anyone point out a flaw in it as an existence proof.
As far as I can see the only premise needed for this diagonalisation to work is that your decision theory doesn’t agree with algorithm X on all possible decisions, so just make algorithm X “whatever happens, recite the Bible backwards 17 times”.
In that case, your answer to problem A is being used in a context other than problem A. That other context is the real problem statement, and you didn’t have it when you chose your answer to A, so it violates the assumption.
Yeah, that definitely violates the “every decision is made with full access to the problem statement” condition. The outcome depends on your decision on problem A, but when making your decision on problem A you have no knowledge that your decision will also be used for this purpose.
I don’t see how this is useful. Let’s take a concrete example, let’s have decision problem A, Omega offers you the choice of $1,000,000, or being slapped in the face with a wet fish. Which would you like your decision theory to choose?
Now, No-mega can simulate you, say, 10 minutes before you find out who he is, and give you 3^^^3 utilons iff you chose the fish-slapping. So your algorithm has to include some sort of prior on the existence of “fish-slapping”-No-megas.
My algorithm “always get slapped in the face with a wet fish where that’s an option”, does better than any sensible algorithm on this particular problem, and I don’t see how this problem is noticeably less realistic than any others.
In other words, I guess I might be willing to believe that you can get around diagonalisation by posing some stringent limits on what sort of all-powerful Omegas you allow (can anyone point me to a proof of that?) but I don’t see how it’s interesting.
Actually, no, the probability of fish-slapping No-megas is part of the input given to the decision theory, not part of the decision theory itself. And since every decision theory problem statement comes with an implied claim that it contains all relevant information (a completely unavoidable simplifying assumption), this probability is set to zero.
Decision theory is not about determining what sorts of problems are plausible, it’s about getting from a fully-specified problem description to an optimal answer. Your diagonalization argument requires that the problem not be fully specified in the first place.
This is a no-choice scenario. If you say that the Bible-reciter is the one that will “win” here, you are using the verb “to win” with a different meaning from the one used when we say that a particular agent “wins” by making the choice that leads to the best outcome.
With the strong disclaimer that I have no background in decision theory beyond casually reading LW...
I don’t think so. The point of simulation (Omega) problems, to me, doesn’t seem to be to judo your intelligence against yourself; rather, it is to “throw your DT off the scent”, building weird connections between events (weird, but still vaguely possible, at least for AIs), that a particular DT isn’t capable of spotting and taking into account.
My human, real-life decision theory can be summarised as “look at as many possible end-result worlds as I can, and at what actions will bring them into being; evaluate how much I like each of them; then figure out which actions are most efficient at leading to the best worlds”. But that doesn’t exactly fly when you’re programming a computer, you need something that can be fully formalised, and that is where those strange Omega scenarios are useful, because your code must get it right “on autopilot”, it cannot improvise a smarter approach on the spot—the formula is on paper, and if it can’t solve a given problem, but another one can, it means that there is room for improvement.
In short, DT problems are just clever software debugging.
I agreed with everything you said after “I don’t think so”. So I am left confused as to why you don’t think so.
You analogize DT problems as test data used to determine whether we should accept or reject a decision theory. I am claiming that our requirements (i.e. “reflective consistency”) are so unrealistic that we will always be able to find test data forcing us to reject. Why do you not think so?
Because I suspect that there are only so many functionally different types of connections between events (at the very least, I see no hint that there must be infinitely many) and once you’ve found them all you will have the possibility of writing a DT that can’t be led to corner itself into suboptimal outcomes due to blind spots.
Am I correct in interpreting this as “infinitely many of them”? If so, I am curious as to what you mean by “functionally different types of connections between events”. Could you provide an example of some “types of connections between events”? Functionally different ones to be sure.
Presumably, the relevance must be your belief that decision theories differ in just how many of these different kinds of connections they handle correctly. Could you illustrate this by pointing out how the decision theory of your choice handles some types of connections, and why you have confidence that it does so correctly?
Oops, yes. Fixed.
CDT can ‘see’ the classical, everyday causal connections that are marked in formulas with the symbol “>” (and I’d have to spend several hours reading at least the Stanford Encyclopaedia before I could give you a confident definition of that), but it cannot ‘see’ the connection in Newcomb’s problem between the agent’s choice of boxes and the content of the opaque box (sometimes called ‘retrocausality’).
I don’t have a favourite formal decision theory, because I am not sufficiently familiar with the underlying math and with the literature of discriminating scenarios to pick a horse. If you’re talking about the human decision “theory” of mine I described above, it doesn’t explicitly do that; the key hand-waving passage is “figure out which actions are most efficient at leading to the best worlds”, meaning I’ll use whatever knowledge I currently possess to estimate how big is the set of Everett branches where I do X and get A, compared to the set of those where I do X and get B. (For example, six months ago I hadn’t heard of the concept of acausal connections and didn’t account for them at all while plotting the likelihoods of possible futures, whereas now I do—at least technically; in practice, I think that between human agents they are a negligible factor. For another example, suppose that some years from now I became convinced that the complexity of human minds, and the variability between different ones, were much greater than I previously thought; then, given the formulation of Newcomb’s problem where Omega isn’t explicitly defined as a perfect simulator and all we know is that it has had a 100% success rate so far, I would suitably increase my estimation of the chances of Omega screwing up and making two-boxing profitable.)
Ok, so if I understand you, there are only some finite number of valid kinds of connections between events and when we have all of them incorporated—when our decision theory can “see” each of them—we are then all done. We have the final, perfect decision theory (FPDT).
But what do you do then when someone—call him Yuri Geller—comes along and points out that we left out one important kind of connection: the “superspooky” connection. And then he provides some very impressive statistical evidence that this connection exists and sets up games in front of large (paying) audiences in which FPDT agents fail to WIN. He then proclaims the need for SSPDT.
Or, if you don’t buy that, maybe you will prefer this one. Yuri Geller doesn’t really exist. He is a thought experiment. Still the existence of even the possibility of superspooky connections proves that they really do exist and hence that we need to have SADT—Saint Anselm’s Decision Theory.
Ok, I’ve allowed my sarcasm to get the better of me. But the question remains—how are you ever going to know that you have covered all possible kinds of connections between events?
You can’t, I guess. Within an established mathematical model, it may be possible to prove that a list of possible configurations of event pairs {A, B} is exhaustive. But the model may always prove in need of expansion or refinement—whether because some element gets understood and modellised at a deeper level (eg the nature of ‘free’ will) or, more worryingly, because of paradigm shifts about physical reality (eg turns out we can time travel).
Decision theories should usually be seen as normative, not descriptive. How “realistic” something is, is not very important, especially for thought experiments. Decision theory cashes out where you find a situation that can indeed be analyzed with it, and where you’ll secure a better outcome by following theory’s advice. For example, noticing acausal control has advantages in many real-world situations (Parfit’s Hitchhiker variants). Eliezer’s TDT paper discusses this towards the end of Part I.
I believe you misinterpreted my “unrealistic requirements”. A better choice of words would have been “unachievably stringent requirements”. I wasn’t complaining that Omega and the like are unrealistic. At least not here.
The version I have of Eliezer’s TDT paper doesn’t have a “Part I”. It is dated September 2010 and has 112 pages. Is there a better version available?
I don’t understand your other comments. Or, perhaps more accurately, I don’t understand what they were in response to.
“Part I” is chapters 1-9. (This concept is referred to in the paper itself.)
Not to me. But a reference might repair that deficiency on my part.
See Eliezer’s posts on Newcomb’s Problem and regret of rationality and TDT problems he can’t solve.
(Incidentally, I found those reference in about 30 seconds, starting from the LW Wiki page on Parfit’s Hitchhiker.)
Ah! Thank you. I see now. The circumstances in which a CDT agent will self-modify to use a different decision theory are that:
1. The agent was programmed by Eliezer Yudkowsky and hence is just looking for an excuse to self-modify.
2. The agent is provided with a prior leading it to be open to the possibility of omniscient, yet perverse agents bearing boxes full of money.
3. The agent is supplied with (presumably faked) empirical data leading it to believe that all such omniscient agents reward one-boxers.
Since the agent seeks reflective equilibrium (because programmed by aforesaid Yudkowsky), and since it knows that CDT requires two boxing, and since it has no reason to doubt that causality is important in this world, it makes exactly the change to its decision theory that seems appropriate. It continues to use CDT except on Newcomb problems, where it one boxes. That is, it self-modifies to use a different decision theory, which we can call CDTEONPWIOB.
Well, ok, though I wouldn’t have said that these are cases where CDT agents do something weird. These are cases where EYDT agents do something weird.
I apologize if it seems that the target of my sarcasm is you WrongBot. It is not.
EY has deluded himself into thinking that reflective consistency is some kind of gold standard of cognitive stability. And then he uses reflective consistency as a lever by which completely fictitious data can uproot the fundamental algorithms of rationality. Which would be fine, except that he has apparently convinced a lot of smart people here that he knows what he is talking about. Even though he has published nothing on the topic. Even though other smart people like Robin tell him that he is trying to solve an already solved problem.
I would say more but …
Reflective consistency is not a “gold standard”. It is a basic requirement. It should be easy to come up with terrible, perverse decision theories that are reflectively consistent (EY does so, sort of, in his TDT outline, though it’s not exactly serious / thorough). The point is not that reflective consistency is a sign you’re on the right track, but that a lack of it is a sign that something is really wrong, that your decision theory is perverse. If using your decision theory causes you to abandon that same decision theory, it can’t have been a very good decision theory.
Consider it as being something like monotonicity in a voting system; it’s a weak requirement for weeding out things that are clearly bad. (Well, perhaps not everyone would agree IRV is “clearly bad”, but… it isn’t even monotonic!) It just happens that in this case evidently nobody noticed before that this would be a good condition to satisfy and hence didn’t try. :)
I am not sure that decision theory is an “already solved” problem. There’s the issue of what happens when agents can self-modify—and so wirehead themselves. I am pretty sure that is an unresolved “grand challenge” problem.
TDT gets better outcomes than CDT when faced with Newcomb’s Problem, Parfit’s Hitchhiker, and the True Prisoner’s Dilemma.
When does CDT outperform TDT? If the answer is “never”, as it currently seems to be, why wouldn’t a CDT agent self-modify to use TDT?
Because it can’t find a write-up that explains how to use it?
Perhaps you can answer the questions that I asked here: What play does TDT make in the game of Chicken? Can you point me to a description of TDT that would allow me to answer that question for myself?
Suppose I’m an agent implementing TDT. My decision in Chicken depends on how much I know about my opponent.
If I know my opponent implements the same decision procedure I do (because I have access to its source code, say), and my opponent has this knowledge about me, I swerve. In this case, my opponent and I are in symmetrical positions and its choice is fully determined by mine; my choice is between payoffs of (0,0) and (-10,-10).
Else, I act identically to a CDT agent.
As Eliezer says here, the one-sentence version of TDT is “Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.”
I’m not sure this is right. Isn’t there a correlated equilibrium that does better?
I think we’re looking at different payoff matrices. I was using the formulation of Chicken whose payoff matrix doesn’t have a correlated equilibrium that beats (C,C).
Using the payoff matrix Perplexed posted here, there is indeed a correlated equilibrium, which I believe the TDT agents would arrive at (given a source of randomness). My bad for not specifying the exact game I was talking about.
...and, this is what I get for not actually checking things before I post them.
Two questions:
Why do you believe the TDT agents would find the correlated equilibrium? Your previous statement and Eliezer quote suggested that a pair of TDT agents would always play symmetrically in a symmetric game. No “spontaneous symmetry breaking”.
Even without a shared random source, there is a Nash mixed equilibrium that is also better than symmetric cooperation. Do you believe TDT would play that if there were no shared random input?
In a symmetric game, TDT agents choose symmetric strategies. Without a source of randomness, this entails playing symmetrically as well.
I’m not sure why you’re talking about shared random input. If both agents get the same input, they can both be expected to treat it in the same way and make the same decision, regardless of the input’s source. Each agent needs an independent source of randomness in order to play the mixed equilibrium; if my strategy is to play C 30% of the time, I need to know whether this iteration is part of that 30%, which I can’t do deterministically because my opponent is simulating me.
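For concreteness, the indifference calculation behind such a mixed strategy can be sketched as follows. The payoff numbers here are an assumption, since the thread discusses several variants of Chicken; playing the resulting mixture is exactly what requires each agent’s own private source of randomness.

```python
# Symmetric mixed equilibrium for Chicken, under one commonly used payoff
# matrix (assumed here; the thread never fixes the numbers):
#                  opponent swerves   opponent goes straight
#   I swerve             0                    -1
#   I go straight        1                   -10

def straight_prob(cc=0.0, cs=-1.0, sc=1.0, ss=-10.0):
    """Probability of going Straight that makes the opponent indifferent.

    cc: both swerve; cs: I swerve, they don't; sc: I don't, they swerve;
    ss: neither swerves (crash).
    Indifference condition: cc*(1-q) + cs*q = sc*(1-q) + ss*q.
    """
    return (sc - cc) / ((sc - cc) + (cs - ss))

q = straight_prob()
assert abs(q - 0.1) < 1e-9  # go straight 10% of the time, swerve 90%
```

With these numbers each player goes straight with probability 0.1, and realizing that 10% requires a private coin: a deterministic rule would be reproduced exactly by the opponent’s simulation.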
Yeah, I think any use of correlated equilibrium here is wrong—that requires a shared random source. I think in this case we just get symmetric strategies, i.e., it reduces to superrationality, where they each just get their own private random source.
Sorry if this was unclear. It was a reference to the correlated pair of random variables used in a correlated equilibrium. I was saying that even without such a correlated pair, you may presume the availability of independent random variables which would allow a Nash equilibrium—still better than symmetric play in this game.
Gah, wait. I feel dumb. Why would TDT find correlated equilibria? I think I had the “correlated equilibrium” concept confused. A correlated equilibrium would require a public random source, which two TDTers won’t have.
Digits of pi are kind of like a public random source.
Ignoring the whole pi-is-not-known-to-be-normal thing, how do you determine which digit of pi to use when you can’t actually communicate and you have no idea how many digits of pi the other player may already know?
Same way you meet up in New York with someone you’ve never talked to: something like Schelling points. I’m not sure that answer works in practice.
Thank you. I hope you realize that you have provided an example of a game in which CDT does better than TDT. For example, in the game with the payoff matrix shown below, there is a mixed strategy Nash equilibrium which is better than the symmetric cooperative result.
Looks like we’re talking about different versions of Chicken. Please see my reply to Sniffnoy.
So TDT is different from CDT only in cases where the game is perfectly symmetric? If you are playing a game that is roughly the symmetric PD, except that one guy’s payoffs are shifted by a tiny +epsilon, then they should both defect?
TDT is different from CDT whenever one needs to consider the interaction of multiple decisions made using the same TDT-based decision procedure. This applies both to competitions between agents, as in the case of Chicken, and to cases where an agent needs to make credible precommitments, as in Newcomb’s Problem.
In the case of an almost-symmetric PD, the TDT agents should still cooperate. To change that, you’d have to make the PD asymmetrical enough that the agents were no longer evaluating their options in the same way. If a change is small enough that a CDT agent wouldn’t change its strategy, TDT agents would also ignore it.
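The “small enough change” point can be sketched numerically. The payoffs below are illustrative (the usual T > R > P > S ordering), with only the row player’s mutual-cooperation entry perturbed to make the game slightly asymmetric:

```python
# Sketch: an epsilon-perturbed Prisoner's Dilemma keeps the same strategic
# structure for both a CDT and a TDT analysis. Payoffs are illustrative.

def pd_row_payoffs(eps=0.0):
    """Row player's payoffs in a PD whose (C, C) entry is shifted by eps."""
    R, S, T, P = 3.0, 0.0, 5.0, 1.0
    return {("C", "C"): R + eps, ("C", "D"): S,
            ("D", "C"): T, ("D", "D"): P}

def defection_dominant(p):
    # The CDT test: D beats C whatever the opponent does.
    return p[("D", "C")] > p[("C", "C")] and p[("D", "D")] > p[("C", "D")]

def mutual_c_preferred(p):
    # The TDT comparison between the two symmetric outcomes.
    return p[("C", "C")] > p[("D", "D")]

for eps in (0.0, 0.5, -0.5):
    p = pd_row_payoffs(eps)
    assert defection_dominant(p)   # the CDT analysis is unchanged...
    assert mutual_c_preferred(p)   # ...and so is the TDT one
```

Only when the perturbation is large enough to break one of these inequalities (here, |eps| ≥ 2) does either style of analysis change its answer, which is one way to read the claim above.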
This doesn’t strike me as the world’s greatest explanation, but I can’t think of a better way to formulate it. Please let me know if there’s something that’s still unclear.
This strikes me as a bit bizarre. You test whether a warped PD is still close enough to symmetric by asking whether a CDT agent still defects in order to decide whether a TDT agent should still cooperate? Are you sure you are not just making up these rules as you go?
Much is unclear and very little seems to be coherently written down. What amazes me is that there is so much confidence given to something no one can explain clearly. So far, the only stable thing in your description of TDT is that it is better than CDT.