A key sentence in your conclusion is this:

“Anthropic decision theory is a new way of dealing with anthropic problems, focused exclusively on finding the correct decision to make, rather than the correct probabilities to assign.”
You then describe ADT as solving the Sleeping Beauty problem. This may be the case if we re-formulate the latter as a decision problem, as you of course do in your paper. But Sleeping Beauty isn’t a decision problem, so I’m not sure if you take yourself to be actually solving it, or if you just think it’s unimportant once we solve the decision problem.
I’d argue that, since agents with different odds can come to exactly the same decision in all anthropic circumstances (SIA with individual responsibility and SSA with total responsibility, both with implicit precommitments), talking about the “real” odds is an error.
Despite the fact that, if you know the expected utility a rational agent assigns to a bet and you know the utilities of the outcomes, you can work out the probabilities the agent assigns to the bet’s outcomes?
You also need to know the impact of the agent’s decision: “If I do this, do I cause identical copies to do the same thing, or do I not?” See my next post for this.
And so if you know that, you could get the probability the agent assigns to various outcomes?
Yes.
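As a quick sketch of the arithmetic behind this exchange (my own illustration, not from the paper): for a two-outcome bet, the expected-utility equation can simply be solved for the probability.

```python
# Minimal sketch: for a two-outcome bet, EU = p * u_win + (1 - p) * u_lose,
# so knowing EU and the outcome utilities lets us solve for p.

def implied_probability(expected_utility, u_win, u_lose):
    """Probability the agent must assign to the winning outcome."""
    return (expected_utility - u_lose) / (u_win - u_lose)

# An agent who values a "win 1 util, lose 0 utils" bet at 2/3 of a util
# must be assigning probability 2/3 to winning.
print(implied_probability(2 / 3, u_win=1.0, u_lose=0.0))  # ~0.667
```

The anthropic wrinkle, as the next comments note, is that in copy situations the value of the bet to the agent also depends on how it counts the impact of its decision.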
But notice that you need the three elements—utility function, probabilities and impact of decision—in order to figure out the decision. So if you observe only the decision, you can’t get at any of the three directly.
With some assumptions and a lot of observation, you can disentangle the utility function from the other two, but in anthropic situations, you can’t generally disentangle the anthropic probabilities from the impact of decision.
Given only the decisions, you can’t disentangle the probabilities from the utility function anyhow. You’d have to do something like ask the agent nicely about its utilities or probabilities, or calculate one of them from first principles, to get at the other. So I don’t feel the situation is qualitatively different. If everything but the probabilities can be seen as a fixed property of the agent, then the agent has some fixed properties, and for each outcome it assigns some probabilities.
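A toy illustration of that point (my own example, with made-up numbers): a single observed decision only tells you that the probability-weighted utilities favoured it, and that one inequality can’t be split into separate probability and utility terms.

```python
# Two hypothetical agents with different probabilities and different utilities
# nonetheless make the same choice, so the observed decision alone cannot
# disentangle the two.

def accepts(p_win, u_win, u_lose):
    return p_win * u_win + (1 - p_win) * u_lose > 0

# Agent A: thinks winning is fairly likely, with even stakes.
print(accepts(p_win=0.6, u_win=1.0, u_lose=-1.0))   # True
# Agent B: thinks winning is unlikely, but values a win ten times the loss.
print(accepts(p_win=0.3, u_win=10.0, u_lose=-1.0))  # True
```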
A simplification: SIA + individual impact = SSA + total impact
i.e. if I think that worlds with more copies are more likely (but my copies’ decisions are independent of mine), this gives the same behaviour as if I believe my decision affects those of my copies (but worlds with many copies are no more likely).
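Here is one way to check that equivalence numerically (my own sketch, using the standard incubator variant of Sleeping Beauty: a fair coin, two copies awake on tails, one on heads, and a bet that pays +1 per accepting copy if tails and costs some amount per accepting copy if heads).

```python
# Numeric check of "SIA + individual impact = SSA + total impact" in the
# incubator Sleeping Beauty setup (toy numbers): fair coin, TAILS -> two
# copies awake, HEADS -> one copy awake; each accepting copy gains +1 if
# tails and loses `cost` if heads.

def eu_sia_individual(cost):
    # SIA: P(tails | awake) = 2/3; I count only the effect of my own acceptance.
    return (2 / 3) * 1 + (1 / 3) * (-cost)

def eu_ssa_total(cost):
    # SSA: P(tails | awake) = 1/2; my decision fixes what all my copies do,
    # so accepting is worth +2 under tails (two copies) and -cost under heads.
    return (1 / 2) * 2 + (1 / 2) * (-cost)

for cost in (0.5, 1.0, 1.9, 2.1, 3.0):
    sia, ssa = eu_sia_individual(cost), eu_ssa_total(cost)
    # The two expected utilities differ in scale but always share a sign,
    # so both agents accept exactly when cost < 2: identical decisions.
    print(f"cost={cost}: SIA-individual={sia:+.3f}, SSA-total={ssa:+.3f}")
```

The two evaluations always agree on whether to accept, which is why observing only the decisions cannot separate the anthropic probabilities from the assumed impact of the decision.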