So I think an account of anthropics that says "give me your values/morality and I'll tell you what to do" is not an account of morality + anthropics, but has actually pulled out morality from an account of anthropics that shouldn't have had it. (Schematically, rather than `define adt(decisionProblem) = chooseBest(someValues, decisionProblem)`, you now have `define adt(values, decisionProblem) = chooseBest(values, decisionProblem)`.)
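To make the contrast concrete, here's a minimal sketch in Python; the names (`adt_v1`, `adt_v2`, `choose_best`, `some_values`) are just my placeholders for the schematic above, not anyone's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class DecisionProblem:
    options: list  # the choices available to the agent

def choose_best(values, problem):
    """Return the option that `values` scores highest."""
    return max(problem.options, key=values)

# Version 1 (the one I'm objecting to): a particular valuation is baked into the
# theory, so the "anthropics" module quietly contains a morality of its own.
some_values = lambda option: 0  # stand-in for whatever morality got baked in

def adt_v1(problem):
    return choose_best(some_values, problem)

# Version 2: the agent's values are an explicit parameter; the theory itself is
# silent about what they should be.
def adt_v2(values, problem):
    return choose_best(values, problem)
```

The point is just the change of signature: in the second version, all of the morality lives in the `values` argument rather than in the theory.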
Perhaps you think that an account that makes mention of morality ends up being (partly) a theory of morality? And also that we should be able to understand anthropic situations apart from values?
To try and give some intuition for my way of thinking about things, suppose I flip a fair coin and ask agent A if it came up heads. If it guesses heads and is correct, it gets $100. If it guesses tails and is correct, both agents B and C get $100. Agents B and C are not derived from A in any special way and will not be offered similar problems—there is not supposed to be anything anthropic here.
What should agent A do? Well, that depends on A's values! The same dependency shows up in non-anthropic decision theory, so I don't see why we should expect an anthropic decision theory to be free of it.
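Here's a quick worked version of that dependence, assuming (purely for illustration) a selfish A and a total-utilitarian A; the fair coin and the $100 payoffs are from the setup above:

```python
# Expected payoffs in the coin problem above, under two illustrative value systems.
P_HEADS = 0.5  # fair coin

def expected_value(guess, values):
    if guess == "heads":
        # correct with probability 0.5; A gets $100, B and C get nothing
        return P_HEADS * values(a=100, b=0, c=0)
    else:
        # correct with probability 0.5; B and C each get $100, A gets nothing
        return (1 - P_HEADS) * values(a=0, b=100, c=100)

selfish = lambda a, b, c: a             # A only counts its own payoff
total_util = lambda a, b, c: a + b + c  # A counts everyone's payoff equally

print(expected_value("heads", selfish), expected_value("tails", selfish))        # 50.0 0.0   -> guess heads
print(expected_value("heads", total_util), expected_value("tails", total_util))  # 50.0 100.0 -> guess tails
```

A selfish A guesses heads, a total utilitarian guesses tails, and nothing anthropic was involved in the difference.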
Here’s another guess at something you might think: “anthropics is about probabilities. It’s cute that you can parcel up value-laden decisions and anthropics, but it’s not about decisions.”
Maybe that’s the right take. But even if so, ADT is useful! It says that in several anthropic situations, even if you’ve not sorted your anthropic probabilities out, you can still know what to do.
Could you say a little more about what a situation is? One thought I had was that maybe a situation is the result of a choice? But then it sounds like your decision theory decides whether you should, for example, take an offered piece of chocolate, regardless of whether you like chocolate or not. So I guess that's not it.
Can you say a little more about how ADT doesn't stand on its own? After all, ADT is just defined as: "An agent should first find all the decisions linked with their own. Then they should maximise expected utility, acting as if they simultaneously controlled the outcomes of all linked decisions, and using the objective (non-anthropic) probabilities of the various worlds."
Is the problem that it mentions expected utility, when it should be agnostic enough to cover values not expressible as utilities?