You can guide all of your actions by a single decision theory that implements M as a special case (generalizes M, if you like) and also solves Newcomb's problem.
Didn’t think about that. Now I’m curious: how does this decision theory work? And does it give incentive to other agents to adopt it wholesale, like M does?
That’s the idea. I more or less know how my version of this decision theory works, and I’m likely to write it up in the next few weeks. I wrote a little bit about it here (I changed my mind about causation; it’s easy enough to incorporate it here, but I’ll have to read up on Pearl first). There is also Eliezer’s version, which started the discussion but was never explicitly described, even at a surface level.
Overall, there seem to be no magic tricks, only the requirement for a philosophically sane problem statement, with inevitable and long-known math following thereafter.
OK, I seem to vaguely understand how your decision theory works, but I don’t see how it implements M as a special case. You don’t mention source code inspection anywhere.
What matters is the decision (and its dependence on other facts). Source code inspection is only one possible procedure for obtaining information about the decision. The decision theory doesn’t need to refer to a specific means of getting that information. I talked about a related issue here.
Forgive me if I’m being dumb, but I still don’t understand. If two similar agents (not identical, to avoid the clones argument) play the PD using your decision theory, how do they arrive at C,C? Even if the agents’ algorithms are common knowledge, a naive attempt to simulate the other guy just falls into bottomless recursion as usual. Is the answer somehow encoded in “the most general precommitment”? What do the agents precommit to? How does Pareto optimality enter the scene?
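As a toy illustration of the recursion worry raised above (not from the original discussion), here is a minimal Python sketch of two agents that each try to decide by naively simulating the other. The agent names and the cooperate-iff-the-other-cooperates rule are illustrative assumptions, not anyone's actual proposal:

```python
import sys

def agent_a():
    # Naive strategy: cooperate exactly when the simulated opponent cooperates.
    return "C" if agent_b() == "C" else "D"

def agent_b():
    # Symmetric naive strategy: simulate agent_a in turn.
    return "C" if agent_a() == "C" else "D"

# Each simulation call triggers another simulation of the caller,
# so the mutual recursion never bottoms out.
sys.setrecursionlimit(100)
try:
    agent_a()
except RecursionError:
    print("bottomless recursion")
```

The point of the sketch is only negative: direct simulation gives neither agent a base case, which is why any approach that reaches C,C has to reason about the dependence between the two decisions rather than literally running the opponent.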