My source code contains a rule M that overrides everything else and is detectable by other agents. It says: I precommit to cooperating (playing the Pareto-optimal outcome) if I can verify that the opponent’s source code contains M. Like a self-printing program (a quine), there’s no infinite recursion in sight. And, funnily enough, this statement can persuade other agents to modify their source code to include M—there’s no downside. Funky!
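A minimal sketch of the idea, with everything simplified: the marker string and substring check are stand-ins for whatever real verification procedure "contains M" would require, and agents' sources are just strings here. The point it illustrates is that the check is syntactic, so two M-agents cooperate without ever simulating each other.

```python
M_MARKER = "RULE_M_V1"  # hypothetical tag standing in for "source contains M"

def rule_m_move(opponent_source: str) -> str:
    """Cooperate iff the opponent's source verifiably contains M.
    This function's own source carries the marker: RULE_M_V1."""
    return "C" if M_MARKER in opponent_source else "D"

agent_b = "... RULE_M_V1 ..."      # stand-in for a full M-agent's source
defector = "... always defect ..."  # a non-M agent

print(rule_m_move(agent_b))    # inspecting an M-agent -> C
print(rule_m_move(defector))   # inspecting a non-M agent -> D
```

Like a quine, the rule refers to its own text only through the fixed marker, so the verification bottoms out immediately rather than recursing.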
Something like this. Referring to an earlier discussion, “Cooperator” is an agent that implements M. Practical difficulties are all in signaling that you implement M, while actually implementing it may be easy (but pointless if you can’t signal it and can’t detect M in other agents).
The relation to Newcomb’s problem is that there is no need to implant a special-purpose algorithm like the M you described above: you can guide all of your actions by a single decision theory that implements M as a special case (generalizes M, if you like), and also solves Newcomb’s problem.
One inaccuracy here is that there are many Pareto optimal global strategies (in PD there are many if you allow mixed strategies), with different payoffs to different agents, and so they must first agree on which they’ll jointly implement. This creates a problem analogous to the Ultimatum game, or the problem of fairness.
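To make the multiplicity concrete, here is a small sketch using the usual illustrative PD payoffs (T=5, R=3, P=1, S=0; these numbers are my assumption, not from the comment above). Correlated mixtures of (C,C) and (D,C) trace out a whole segment of Pareto-optimal payoff pairs, each favoring one player over the other:

```python
# Standard illustrative PD payoffs: reward, sucker, temptation.
R, S, T = 3, 0, 5

def mixture(p):
    """Payoff pair for playing (C,C) with weight 1-p and (D,C) with weight p."""
    return ((1 - p) * R + p * T, (1 - p) * R + p * S)

for p in (0.0, 0.25, 0.5):
    u1, u2 = mixture(p)
    print(f"p={p:.2f}: player 1 gets {u1:.2f}, player 2 gets {u2:.2f}")
```

Every point on this segment is Pareto-optimal (moving along it helps one player only by hurting the other), yet player 1 prefers larger p and player 2 smaller p — which is exactly the Ultimatum-like disagreement over which joint strategy to implement.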
you can guide all of your actions by a single decision theory that implements M as a special case (generalizes M if you like), and also solves Newcomb’s problem
Didn’t think about that. Now I’m curious: how does this decision theory work? And does it give incentive to other agents to adopt it wholesale, like M does?
That’s the idea. I more or less know how my version of this decision theory works, and I’m likely to write it up in the next few weeks. I wrote a little bit about it here (I’ve changed my mind about causation; it’s easy enough to incorporate here, but I’ll have to read up on Pearl first). There is also Eliezer’s version, which started the discussion but was never explicitly described, even at a surface level.
Overall, there seem to be no magic tricks, only the requirement for a philosophically sane problem statement, with inevitable and long-known math following thereafter.
OK, I seem to vaguely understand how your decision theory works, but I don’t see how it implements M as a special case. You don’t mention source code inspection anywhere.
What matters is the decision (and its dependence on other facts). Source code inspection is only one possible procedure for obtaining information about the decision. The decision theory doesn’t need to refer to a specific means of getting that information. I talked about a related issue here.
Forgive me if I’m being dumb, but I still don’t understand. If two similar agents (not identical to avoid the clones argument) play the PD using your decision theory, how do they arrive at C,C? Even if agents’ algorithms are common knowledge, a naive attempt to simulate the other guy just falls into bottomless recursion as usual. Is the answer somehow encoded in “the most general precommitment”? What do the agents precommit to? How does Pareto optimality enter the scene?
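The recursion worry can be made concrete. In this sketch (names and the depth cap are invented for illustration), each agent’s move is defined by simulating the other’s move, and the naive procedure never terminates — the cap only makes the non-termination observable:

```python
# Naive mutual simulation: A's move depends on B's move and vice versa.
# A depth cap stands in for "recurses forever" so the sketch actually runs.

def move_a(depth=0):
    if depth > 50:
        raise RecursionError("naive mutual simulation never bottoms out")
    # A cooperates iff its simulation says B will cooperate.
    return "C" if move_b(depth + 1) == "C" else "D"

def move_b(depth=0):
    # B uses the same rule, simulating A in turn.
    return "C" if move_a(depth + 1) == "C" else "D"

try:
    move_a()
except RecursionError as e:
    print("no answer:", e)
```

This is precisely what rule M sidesteps with its syntactic check, and what a general decision theory has to handle some other way — by reasoning about the dependence between the decisions rather than by running the opponent.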