So I think an account of anthropics that says “give me your values/morality and I’ll tell you what to do” is not an account of morality + anthropics, but has actually pulled out morality from an account of anthropics that shouldn’t have had it. Schematically, rather than

    define adt(decisionProblem) = chooseBest(someValues, decisionProblem)

you now have

    define adt(values, decisionProblem) = chooseBest(values, decisionProblem)
Perhaps you think that an account that makes mention of morality ends up being (partly) a theory of morality? And also that we should be able to understand anthropic situations apart from values?
To try to give some intuition for my way of thinking about things, suppose I flip a fair coin and ask agent A to guess whether it came up heads. If it guesses heads and is correct, it gets $100. If it guesses tails and is correct, both agents B and C get $100. Agents B and C are not derived from A in any special way and will not be offered similar problems; there is not supposed to be anything anthropic here.
What should agent A do? Well, that depends on A’s values! This is going to be true for a non-anthropic decision theory, so I don’t see why we should expect an anthropic decision theory to be free of this dependency.
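To spell that dependency out, here is a minimal sketch in Python of the example above (the function and weight names are just illustrative, nothing from ADT itself):

    # Expected value of A's guess on a fair coin, given how much A weighs
    # payoffs to B and C relative to payoffs to itself.
    def expected_value(guess, weight_on_others):
        if guess == "heads":
            # Correct with probability 0.5; A itself gets $100.
            return 0.5 * 100
        else:
            # Correct with probability 0.5; B and C each get $100.
            return 0.5 * 2 * 100 * weight_on_others

    # A purely selfish A prefers heads:
    print(expected_value("heads", 0.0), expected_value("tails", 0.0))  # 50.0 0.0
    # An A that values B and C as much as itself prefers tails:
    print(expected_value("heads", 1.0), expected_value("tails", 1.0))  # 50.0 100.0

Nothing anthropic is going on, yet the recommended guess flips with the value weights.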
Here’s another guess at something you might think: “anthropics is about probabilities. It’s cute that you can parcel up value-laden decisions and anthropics, but it’s not about decisions.”
Maybe that’s the right take. But even if so, ADT is useful! It says that in several anthropic situations, even if you’ve not sorted your anthropic probabilities out, you can still know what to do.
The way I see it, your morality defines a preference ordering over situations and your decision theory maps from decisions to situations. There can be some interaction there, in that different moralities may want different inputs, i.e. consequentialism only cares about the consequences, while other theories also care about the actions that you chose. But the point is that each theory should be capable of standing on its own. And I agree with probability being somewhat ambiguous for anthropic situations, but our decision theory can just output betting outcomes instead of probabilities.
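One way to read that factoring as code, as a rough sketch with hypothetical names (and assuming a consequentialist-style morality that only needs situations as inputs):

    from typing import Callable, Dict, Hashable

    Situation = Hashable
    Lottery = Dict[Situation, float]   # situation -> probability

    def choose(decisions,
               decision_theory: Callable[[object], Lottery],
               value: Callable[[Situation], float]):
        """Pick the decision whose lottery over situations is best according
        to the supplied values; neither component needs the other in order
        to be defined."""
        def expected(d):
            return sum(p * value(s) for s, p in decision_theory(d).items())
        return max(decisions, key=expected)

A non-consequentialist morality would want the chosen action passed in as well, which is the “different inputs” interaction mentioned above.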
but our decision theory can just output betting outcomes instead of probabilities.
Indeed. And ADT outputs betting outcomes without any problems. It’s when you interpret them as probabilities that you start having problems, because in order to go from betting odds to probabilities, you have to sort out how much you value two copies of you getting a reward, versus one copy.
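To make that last point concrete, here is a toy sketch of my own (illustrative numbers, not taken from the post): a fair coin is flipped, heads creates one copy of you, tails creates two, and each copy is offered a ticket that pays $1 if the coin was tails. Using only the ordinary non-anthropic probability of 0.5, the break-even ticket price, i.e. the betting odds, depends on how rewards to multiple copies are valued:

    # Break-even price of a "$1 if tails" ticket, using only the non-anthropic
    # coin probability of 0.5. combine(n_copies, per_copy_payoff) encodes how
    # the agent values n copies each receiving per_copy_payoff.
    def break_even_price(combine):
        lo, hi = 0.0, 1.0
        for _ in range(60):                 # bisect on the ticket price
            price = (lo + hi) / 2
            ev = 0.5 * combine(1, -price) + 0.5 * combine(2, 1 - price)
            lo, hi = (price, hi) if ev > 0 else (lo, price)
        return round((lo + hi) / 2, 3)

    print(break_even_price(lambda n, r: n * r))  # sum over copies   -> 0.667
    print(break_even_price(lambda n, r: r))      # average per copy  -> 0.5

Read naively as “the probability of tails”, the same calculation gives 2/3 or 1/2 depending purely on how the copies are valued, which is the sense in which the probabilities, unlike the bets, can’t be settled without settling the values.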
Well, if anything that’s about your preferences, not morality.
Moral preferences are a specific subtype of preferences.
I suppose that makes sense if you’re a moral non-realist.
Also, you may care about other people for reasons of morality. Or simply because you like them. Ultimately, why you care doesn’t matter; only the fact that you have a preference matters. The morality aspect is inessential.
your decision theory maps from decisions to situations
Could you say a little more about what a situation is? One thought I had is that maybe a situation is the result of a choice? But then it sounds like your decision theory would decide whether you should, for example, take an offered piece of chocolate, regardless of whether you like chocolate or not. So I guess that’s not it.
But the point is that each theory should be capable of standing on its own
Can you say a little more about how ADT doesn’t stand on its own? After all, ADT is just defined as:
An ADT agent is an agent that would implement a self-confirming linking with any agent that would do the same. It would then maximise its expected utility, conditional on that linking, and using the standard non-anthropic probabilities of the various worlds.
Is the problem that it mentions expected utility, but it should be agnostic over values not expressible as utilities?