Maybe I misunderstand what you mean by “updating beliefs based on action”. Here’s how I interpret it in the psychopath button case: when calculating the expected utility of pushing the button, don’t use the prior probability that you’re a psychopath in the calculation; use the probability that you’re a psychopath conditional on deciding to push the button (which is 1). If you use that conditional probability, then the expected utility of pushing the button is guaranteed to be negative, no matter what the prior probability that you’re a psychopath is. Similarly, when calculating the expected utility of not pushing the button, use the probability that you’re a psychopath conditional on deciding not to push the button.
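As a minimal sketch of that calculation (writing $\psi$ for “I am a psychopath” and $U$ for the utility function, both notation introduced here, and assuming the usual setup in which pushing the button kills every psychopath, yourself included):

$$EU(\text{push}) = P(\psi \mid \text{push})\,U(\text{push}, \psi) + P(\neg\psi \mid \text{push})\,U(\text{push}, \neg\psi) = U(\text{push}, \psi) < 0,$$

since $P(\psi \mid \text{push}) = 1$ and pushing while being a psychopath gets you killed. The expected utility of not pushing is computed the same way, only with the probabilities conditioned on deciding not to push.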
But then, applying the same logic to the PD case, you should calculate expected utilities for your actions using probabilities for your clone’s action that are conditional on the very action that you are considering. So when you’re calculating the expected utility for cooperating, use probabilities for your clone’s action conditional on you cooperating (i.e., 1 for the clone cooperating, 0 for the clone defecting). When calculating the expected utility for defecting, use probabilities for your clone’s action conditional on you defecting (0 for cooperating, 1 for defecting). If you do things this way, then cooperating ends up having a higher expected utility.
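The same kind of sketch for the clone case, writing $U(a, b)$ for your payoff when you play $a$ and the clone plays $b$:

$$EU(C) = P(\text{clone } C \mid C)\,U(C, C) + P(\text{clone } D \mid C)\,U(C, D) = U(C, C)$$
$$EU(D) = P(\text{clone } C \mid D)\,U(D, C) + P(\text{clone } D \mid D)\,U(D, D) = U(D, D)$$

Since mutual cooperation pays more than mutual defection in a prisoner’s dilemma, $EU(C) > EU(D)$.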
Perhaps another way of putting it is that once you know the clone’s actions are perfectly correlated with your own, you have no good reason to treat the clone as an independent agent in your analysis. The standard tools of game theory, designed to deal with cases involving multiple independent agents, are no longer relevant. Instead, treat the clone as if he were part of the world-state in a standard single-agent decision problem, except this is a part of the world-state about which your actions give you information (kind of like whether or not you’re a psychopath in the button case).
Imagine I am absolutely certain that I will cooperate and that my clone will cooperate. I am still capable of asking “what would my payoff be if I didn’t cooperate?”, and that payoff is the payoff where I defect and the clone cooperates, since I expect the clone to do whatever I will do, and I expect to cooperate. There is no reason to update my belief about what the clone will do in this thought experiment, since the thought experiment is about a zero-probability event.
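To make that comparison concrete, here is a sketch using the payoffs that the calculation later in this thread appears to use (200 for mutual cooperation, 300 for defecting against a cooperator; the particular numbers are only illustrative). Holding the clone’s action fixed at cooperate, the counterfactual comparison is

$$U(\text{defect}, \text{cooperate}) = 300 > 200 = U(\text{cooperate}, \text{cooperate}),$$

which is why, evaluated this way, defecting looks better.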
The psychopath case is different because I have uncertainty regarding whether I am a psychopath, and the choice I want to make helps me learn about myself. I have no uncertainty concerning my clone.
You are reasoning about an impossible scenario; if the probability of you reaching the event is 0, the probability of your clone reaching it is also 0. To make sense of it, you have to consider it in terms of epsilon probabilities; since the probability will be the same for both you and your clone, this gets you
$200(1-\epsilon)^2 + 300\,\epsilon \cdot (1-\epsilon) + 100\,\epsilon^2 = 200 - 400\epsilon + 200\epsilon^2 + 300\epsilon - 300\epsilon^2 + 100\epsilon^2 = 200 - 100\epsilon$, which is maximized when $\epsilon = 0$.
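Spelling out the four joint outcomes behind that sum (on the reading that each of you independently deviates with probability $\epsilon$; the “I cooperate, clone defects” outcome presumably carries a payoff of 0, since its term does not appear above):

$$200\,(1-\epsilon)^2 + 300\,\epsilon(1-\epsilon) + 0 \cdot (1-\epsilon)\epsilon + 100\,\epsilon^2 = 200 - 100\epsilon.$$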
To claim that you and your clone could take different actions is to turn this into a question about trembling-hand equilibria, which violates the basic assumptions of the game.
It’s common in game theory to consider situations off the equilibrium path that occur with probability zero, without taking a trembling-hand approach.
Yes, I should. In the psychopath case, whether I press the button depends on my beliefs; in a PD, by contrast, I should defect regardless of my beliefs.
I agree with your first paragraph.