The question is how much of this utility can be attributed to the agent's decision rather than to the agent's type.
To many two-boxers, this isn’t the question. At least some two-boxing proponents in the philosophical literature seem to distinguish between winning decisions and rational decisions, the contention being that winning decisions can be contingent on something stupid about the universe. For example, you could live in a universe that specifically rewards agents who use a particular decision theory, and that says nothing about the rationality of that decision theory.
I'm not convinced this is actually the appropriate way to interpret most two-boxers. I've read papers that say things that sound like this claim, but I think the distinction generally being gestured at is the distinction I'm making here (with different terminology). I even think we get hints of that in the last sentence of your post, where you start to talk about agents being rewarded for their decision theory rather than their decision.