"your decision theory maps from decisions to situations"
Could you say a little more about what a situation is? One thought I had is that a situation might be the result of a choice. But then it sounds like your decision theory decides whether you should, for example, take an offered piece of chocolate, regardless of whether you like chocolate or not. So I guess that's not it.
But the point is that each theory should be capable of standing on its own
Can you say a little more about how ADT doesn’t stand on its own? After all, ADT is just defined as:
An ADT agent is an agent that would implement a self-confirming linking with any agent that would do the same. It would then maximise its expected utility, conditional on that linking, using the standard non-anthropic probabilities of the various worlds.
Is the problem that it mentions expected utility, but it should be agnostic over values not expressible as utilities?
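For what it's worth, here is roughly how I'm currently parsing the expected-utility step of the definition; the worlds, probabilities, utilities, and decision labels below are all invented for illustration, not taken from your post:

```python
# A minimal sketch of how I read the expected-utility step of ADT.
# All specifics here (worlds, probabilities, utilities) are made up.

# Standard non-anthropic probabilities of the various worlds
# (not adjusted for the number of copies of the agent in each world).
worlds = {
    "heads": 0.5,
    "tails": 0.5,
}

# Utility of each candidate decision in each world, assuming every agent
# covered by the self-confirming linking makes the same decision.
utilities = {
    "heads": {"accept": 1.0, "refuse": 0.0},
    "tails": {"accept": 2.0, "refuse": 0.0},
}

def expected_utility(decision):
    """EU of a decision, conditional on the linking, using the
    non-anthropic world probabilities."""
    return sum(p * utilities[w][decision] for w, p in worlds.items())

# The ADT agent then picks the decision with the highest expected utility.
best = max(["accept", "refuse"], key=expected_utility)
```

If that reading is right, then the definition does seem to presuppose that the agent's values are expressible as a utility function, which is what prompted my question above.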