A question for TDT gurus. Do acausal trade, and other acausal coordinations, require a complete instance of each cooperating agent at each of the acausally connected sites? It seems that there at least has to be a model of each agent at each site, if not a “complete instance”.
For example, the TDT solution to Newcomb’s problem, as I understand it, amounts to you coordinating with the copy of you or the model of you which exists in Omega. There’s only one actively coordinating agent, you (Omega is only reactive to whatever you decide), and there’s a copy or a model of you at both ends of the arrangement.
Similarly, when people imagine AIs coordinating acausally—let’s say, two AIs in two different Tegmark-level-IV worlds—we can say that at the very least, each AI that is a party to the deal must have a concept of the other one’s existence, or else the deal could never get started. (If we imagine equilibria reached by whole populations of AIs scattered throughout a multiverse, then the local model may be of a subpopulation of AIs sharing a characteristic, rather than of individual AIs.) So it’s not just a matter of “I’m here and you’re there”. There has to be a model of you here, and there has to be a model of me there. But how detailed do the models have to be?
To focus on Newcomb’s problem: TDT still one-boxes even if Omega is a little bit bad at predicting whether or not you will one-box. How inaccurate Omega can be before TDT stops one-boxing depends on the precise rewards for one- and two-boxing.
Shminux asked a similar question a while ago and I forgot to tell him that it’s in the TDT paper. Hey, shminux: it’s in the TDT paper.
You can indeed figure out what TDT recommends just by knowing that Omega predicts with a certain accuracy; you don’t need to know how it makes its prediction.
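Concretely, and assuming the standard Newcomb payoffs (1,000,000 in the opaque box, 1,000 in the transparent one; those numbers are just the usual illustration, nothing essential), suppose Omega predicts correctly with probability p. Treating Omega’s prediction as determined by your decision, the way TDT does, the comparison is

$$\mathbb{E}[\text{one-box}] = p \cdot 1{,}000{,}000, \qquad \mathbb{E}[\text{two-box}] = (1-p) \cdot 1{,}000{,}000 + 1{,}000,$$

so one-boxing wins exactly when $p \cdot 1{,}000{,}000 > (1-p) \cdot 1{,}000{,}000 + 1{,}000$, i.e. when $p > 0.5005$. Change the payoffs and that threshold moves with them.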
But what I’m looking for is a realistic idea of the circumstances under which acausal deals can actually happen. People speculate about post-singularity intelligences throughout the multiverse establishing acausal equilibria with each other, and so on. I have in the past insisted that this is nonsense because, in a combinatorially exhaustive multiverse, every possible response to your actions will be taken somewhere. Any deal you think you have made is illusory, because even if there is a being somewhere else who acts like the deal-partner you have in mind, there should also be near-copies of that being which act in all other possible ways.
Also, there is nothing that requires an intelligent agent to engage in acausal dealmaking, even if it’s possible. If it’s a selfish agent, caring only about what happens to this instance of itself, then it literally has nothing to gain from acausally motivated behavior. It might be imagined that agents with impersonal utility functions, such as happiness maximizers or paperclip maximizers, have a reason to play the acausal game, because they will thereby have an effect on the amount of happiness or the number of paperclips in places beyond their immediate causal reach. But if acausal dealmaking is just an illusion, then even that won’t happen. It seems that a minimum necessary criterion for acausal dealmaking to make sense is the belief that the deal won’t be rendered meaningless by the heterogeneous behavior of our potential negotiating partners.
Returning to single-world acausal deals, how can Newcomb’s scenario actually come to pass? It requires an Omega that is a good enough predictor, and the agent who reacts to Omega’s offer has to have reason to believe that Omega is a good enough predictor. Presumably we can make this happen if Omega has a copy of the “source code” of the other agent, and if this can be proved to the other agent. One can then ask: how simple can these agents be while these conditions still hold? Could you have simple Bayesian agents, in a simple software environment, which meet these conditions? The other question is how to implement the weakened condition (Omega is only a moderately good predictor, and is known to be such), and whether that affects the simplicity threshold.
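To make the “simple software environment” question concrete, here is a toy sketch in Python. Everything in it (names, payoffs, the accuracy parameter) is an illustrative assumption, and Omega’s predictive power comes from the crudest possible source: it simply runs the agent’s decision function.

```python
# Toy Newcomb setup: Omega predicts by literally running the agent's
# decision function. All names and payoffs here are illustrative
# assumptions, not a standard implementation.

import random

OPAQUE_PRIZE = 1_000_000
TRANSPARENT_PRIZE = 1_000

def omega_fill_boxes(agent_decision_fn, accuracy=1.0):
    """Omega simulates the agent to predict its choice, then fills the
    opaque box only if it predicts one-boxing. accuracy < 1 models a
    moderately good predictor that sometimes gets it wrong."""
    predicted = agent_decision_fn()
    if random.random() > accuracy:
        predicted = "two-box" if predicted == "one-box" else "one-box"
    return OPAQUE_PRIZE if predicted == "one-box" else 0

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play(agent_decision_fn, accuracy=1.0):
    # Omega fills the boxes first, then the agent makes its actual choice.
    opaque = omega_fill_boxes(agent_decision_fn, accuracy)
    choice = agent_decision_fn()
    return opaque if choice == "one-box" else opaque + TRANSPARENT_PRIZE

# With a perfect predictor, the one-boxer walks away with 1,000,000
# and the two-boxer with 1,000.
print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```

What the sketch leaves out is exactly the hard part of the question: the agent here never verifies anything about Omega, so the open issue is how little machinery an agent can have while still being able to check that Omega’s prediction procedure tracks its own decision procedure, and what changes when the accuracy is known to be strictly less than 1.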