TL;DR Entities chasing each other in spirals through their minds may eventually meet and shake hands, but logic alone does NOT give you the ability to do this. You need access to each other’s source code.
It seems to me what “superrationality” is grasping towards is the idea that if both players can predict each other’s actions, that provides pragmatic grounds for cooperation. All the other crap (the skewed payoff matrix, Hofstadter’s “sufficiently logical” terminology, even the connotations of the word “superrationality” itself) is a red herring.
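As a toy illustration (mine, not Hofstadter’s) of what “predicting each other” could buy you: if each strategy literally gets to read the other’s source code, conditional cooperation is trivial to write down. The `clique_bot` / `defect_bot` names and the “C”/“D” move labels are just made up for this sketch, and it assumes the code is saved as a script so `inspect.getsource` works:

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate only with an exact copy of myself; defect otherwise."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def defect_bot(opponent_source: str) -> str:
    """Ignore the opponent's source and always defect."""
    return "D"

def play(a, b):
    # Each player reads the other's source before choosing a move.
    return a(inspect.getsource(b)), b(inspect.getsource(a))

print(play(clique_bot, clique_bot))  # ('C', 'C'): mutual prediction, mutual cooperation
print(play(clique_bot, defect_bot))  # ('D', 'D'): the conditional cooperator can't be exploited
```

The point of the sketch is that the cooperation comes from the access to each other’s decision procedure, not from any extra dose of “logic.”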
This all hinges on the idea that your decision CAN affect their decision, through their mental emulation of you, and vice versa. If the prediction is one-sided, we have Newcomb’s problem, except it collapses to a normal prisoner’s dilemma: although Omega knows whether you’ll cooperate, you have no way of knowing whether Omega will cooperate, so he has no incentive to base his behavior on your decision, even though he knows it. He’s better off always defecting.
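A minimal payoff calculation (the numbers are the standard illustrative PD values, not anything from the original thought experiment) shows why one-sided prediction doesn’t help: whatever move Omega reads off of you, defecting pays him more.

```python
# Standard prisoner's dilemma payoffs (illustrative numbers only):
# (my move, Omega's move) -> (my payoff, Omega's payoff)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# One-sided prediction: Omega knows my move in advance, but I can't
# condition on his. Check what each reply earns him:
for my_move in ("C", "D"):
    for omega_move in ("C", "D"):
        _, omega_payoff = PAYOFF[(my_move, omega_move)]
        print(f"I play {my_move}, Omega plays {omega_move}: Omega gets {omega_payoff}")

# If I cooperate, Omega gets 5 by defecting vs 3 by cooperating;
# if I defect, he gets 1 vs 0. Knowing my move never makes cooperation
# the better reply, so he always defects.
```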
This is a point that a lot of people here seem to get confused about. They think “but if I could predict Omega’s actions, he’d have an incentive to conditionally cooperate, and so I’D have an incentive to cooperate, and we’d cooperate, and that’d be a better outcome, ergo that must be more rational, and Omega is rational, so he’ll act in a way I can predict, and specifically he’ll conditionally cooperate!!1”
But I think this is wrong. The fact that the world would be a better place if you could predict Omega’s actions (and the fact that Omega knows this) doesn’t give Omega the power to make you capable of predicting his actions, any more than it gives him the power to make your mom capable of predicting his actions, or a ladybug capable of predicting his actions, or another superintelligence capable of predicting his actions (although that last one might manage it on its own to start with). He’s in another room.
The fact that he knows what you’re going to do means there’s already been some information leakage, since even a superintelligence can’t extrapolate what decision you’ll make in a complicated game from nothing but the fact that your name is Jeff. He apparently knows quite a bit about you.
And if you knew ENOUGH about him, including his superhuman knowledge of yourself, and were smart enough to analyze the data (good luck), you’d be able to predict his actions too. But it seems disingenuous to even call that the prisoner’s dilemma.