I’m generally against this approach, because the fact that X can be modelled as Y doesn’t mean that Y is literally true. It mixes up anthropics and morality when these issues should be solved separately. Obviously this is a neat trick, but I don’t see it as anything more.
I see it as necessary, because I don’t see anthropic probabilities as actually meaning anything.
Standard probabilities are informally “what do I expect to see”, and this can be formalised as a cost function for making the wrong predictions.
In anthropic situations, the “I” in that question is not clear: you, or you and your copies, or you and those similar to you? When you formalise this as a cost function, you have to decide how to spread the cost amongst your different copies: do you spread it as a total cost, or an average one? In the first case, SIA emerges; in the second, SSA.
So you can’t talk about anthropic “probabilities” without including how much you care about the cost to your copies.
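To make that concrete, here is a minimal sketch (my own illustration, not something from the original exchange) of a standard incubator-style setup: a fair coin creates one copy of you on heads and two copies on tails, every copy reports the same probability p for heads, and each copy pays a Brier penalty for being wrong. The only thing that changes between the two runs is whether that penalty is summed over copies or averaged within each world.

# Hypothetical setup: fair coin, heads -> 1 copy, tails -> 2 copies.
# Each copy reports probability p for heads and pays a Brier loss for being wrong.
def expected_cost(p, spread):
    copies_per_world = {"heads": 1, "tails": 2}
    cost = 0.0
    for world, n in copies_per_world.items():
        per_copy = (1 - p) ** 2 if world == "heads" else p ** 2  # Brier loss in this world
        world_cost = n * per_copy if spread == "total" else per_copy  # sum vs average over copies
        cost += 0.5 * world_cost  # fair coin: non-anthropic prior 1/2 per world
    return cost

def best_p(spread, steps=10_000):
    grid = [i / steps for i in range(steps + 1)]
    return min(grid, key=lambda p: expected_cost(p, spread))

print("total cost   ->", best_p("total"))    # ~0.333: the SIA answer
print("average cost ->", best_p("average"))  # ~0.5:   the SSA answer

Spreading the cost as a total makes p = 1/3 optimal, the SIA answer; spreading it as a per-world average makes p = 1/2 optimal, the SSA answer.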
So you can’t talk about anthropic “probabilities” without including how much you care about the cost to your copies.

Yeah, but that has nothing to do with morality, just individual preferences. And instead of using just a probability, you can define a probability and a number of repeats.
It seems to me that ADT separates anthropics and morality. For example, Bayesianism doesn’t tell you what you should do, just how to update your beliefs. Given your beliefs, what you value decides what you should do. Similarly, ADT gives you an anthropic decision procedure. What exactly does it tell you to do? Well, that depends on your morality!
The point is that ADT is a theory of morality + anthropics, whereas your core theory of anthropics conceptually shouldn’t refer to morality at all, but should be independent.
So I think an account of anthropics that says “give me your values/morality and I’ll tell you what to do” is not an account of morality + anthropics, but has actually pulled out morality from an account of anthropics that shouldn’t have had it. Schematically, rather than

define adt(decisionProblem) = chooseBest(someValues, decisionProblem)

you now have

define adt(values, decisionProblem) = chooseBest(values, decisionProblem)
Perhaps you think that an account that makes mention of morality ends up being (partly) a theory of morality? And also that we should be able to understand anthropic situations apart from values?
To try and give some intuition for my way of thinking about things, suppose I flip a fair coin and ask agent A if it came up heads. If it guesses heads and is correct, it gets $100. If it guesses tails and is correct, both agents B and C get $100. Agents B and C are not derived from A in any special way and will not be offered similar problems—there is not supposed to be anything anthropic here.
What should agent A do? Well, that depends on A’s values! This is going to be true for a non-anthropic decision theory, so I don’t see why we should expect an anthropic decision theory to be free of this dependency.
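To spell that out with numbers, here is a tiny sketch with made-up “caring weights” saying how much A counts $100 going to B or to C against its own $100 (the weights are hypothetical, purely for illustration):

# Fair coin: guessing heads correctly pays A $100; guessing tails correctly pays B and C $100 each.
def expected_value(guess, weight_self, weight_other):
    if guess == "heads":
        return 0.5 * 100 * weight_self
    return 0.5 * (100 * weight_other + 100 * weight_other)

for w_other in (0.0, 0.3, 1.0):
    pick = max(["heads", "tails"], key=lambda g: expected_value(g, 1.0, w_other))
    print("caring about B and C at weight", w_other, "-> guess", pick)
# weight 0.0 -> heads, weight 1.0 -> tails: the "right" guess depends on A's values.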
Here’s another guess at something you might think: “anthropics is about probabilities. It’s cute that you can parcel up value-laden decisions and anthropics, but it’s not about decisions.”
Maybe that’s the right take. But even if so, ADT is useful! It says that in several anthropic situations, even if you’ve not sorted your anthropic probabilities out, you can still know what to do.
The way I see it, your morality defines a preference ordering over situations, and your decision theory maps from decisions to situations. There can be some interaction there, in that different moralities may want different inputs, i.e. consequentialism only cares about the consequences, while other theories care about the actions that you chose. But the point is that each theory should be capable of standing on its own. And I agree that probability is somewhat ambiguous for anthropic situations, but our decision theory can just output betting outcomes instead of probabilities.
but our decision theory can just output betting outcomes instead of probabilities.
Indeed. And ADT outputs betting outcomes without any problems. It’s when you interpret them as probabilities that you start having problems, because in order to go from betting odds to probabilities, you have to sort out how much you value two copies of you getting a reward, versus one copy.
Well, if anything that’s about your preferences, not morality.

Moral preferences are a specific subtype of preferences.

I suppose that makes sense if you’re a moral non-realist.
Also, you may care about other people for reasons of morality, or simply because you like them. Ultimately, why you care doesn’t matter; only the fact that you have a preference matters. The morality aspect is inessential.
your decision theory maps from decisions to situations
Could you say a little more about what a situation is? One thought I had is that maybe a situation is the result of a choice? But then it sounds like your decision theory would decide whether you should, for example, take an offered piece of chocolate, regardless of whether you like chocolate or not. So I guess that’s not it.
But the point is that each theory should be capable of standing on its own
Can you say a little more about how ADT doesn’t stand on its own? After all, ADT is just defined as:
An ADT agent is an agent that would implement a self-confirming linking with any agent that would do the same. It would then maximise its expected utility, conditional on that linking, and using the standard non-anthropic probabilities of the various worlds.
Is the problem that it mentions expected utility, but it should be agnostic over values not expressible as utilities?
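Since that definition leaves the utility function as an input, here is a rough sketch (my construction, using an assumed incubator-style bet rather than anything specified above) of how an ADT agent’s choice comes out once values are supplied: heads creates one copy, tails two, each copy can buy a ticket at some price that pays $1 if the coin was tails, the linking means all copies decide identically, and the worlds keep their ordinary non-anthropic prior of 1/2 each.

# world -> (non-anthropic prior, number of copies created)
WORLDS = {"heads": (0.5, 1), "tails": (0.5, 2)}

def adt_expected_utility(buy, price, values):
    eu = 0.0
    for world, (prior, copies) in WORLDS.items():
        # Each copy's payoff if the linked decision is to buy: $1 on tails, minus the ticket price.
        payoff = ((1.0 if world == "tails" else 0.0) - price) if buy else 0.0
        eu += prior * values(payoff, copies)  # `values` aggregates the reward across copies
    return eu

def adt_buys(price, values):
    return adt_expected_utility(True, price, values) > adt_expected_utility(False, price, values)

total = lambda payoff, copies: payoff * copies  # sum the reward over all copies
average = lambda payoff, copies: payoff         # average reward per copy in that world

for price in (0.40, 0.60, 0.70):
    print(price, "total-utilitarian buys:", adt_buys(price, total),
          "| average-utilitarian buys:", adt_buys(price, average))

The total valuation keeps buying up to a price of 2/3 (SIA-style betting odds), while the average valuation only buys up to 1/2 (SSA-style odds): the same ADT machinery, with the same non-anthropic probabilities, yields different bets depending on the values you hand it.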