your decision theory maps from decisions to situations
Could you say a little more about what a situation is? One thought I had is that maybe a situation is the result of a choice? But then it sounds like your decision theory decides whether you should, for example, take an offered piece of chocolate, regardless of whether you like chocolate or not. So I guess that's not it.
But the point is that each theory should be capable of standing on its own
Can you say a little more about how ADT doesn’t stand on its own? After all, ADT is just defined as:
An ADT agent is an agent that would implement a self-confirming linking with any agent that would do the same. It would then maximise its expected utility, conditional on that linking, and using the standard non-anthropic probabilities of the various worlds.
Is the problem that it mentions expected utility, but it should be agnostic over values not expressible as utilities?
I hadn't thought about this! I'd be interested in learning more. Do you have a suggested place to start reading, or more search term suggestions (on top of Ewald)?
Also, can animals harbour malaria pathogens that harm humans? This section of the wiki page on malaria makes me think not, but it's not explicitly stated.