I don’t think that what you need has any bearing on what reality has actually given you. Nor can we talk about different decision theories here—as long as we are talking about maximizing expected utility, we have our decision theory; that is enough specification to answer the Newcomb one-shot question. We can only arrive at a different outcome by stating the problem differently, or by sneaking in different metaphysics, or by just doing bad logic (in this case, usually by allowing contradictory beliefs about free will in different parts of the analysis).
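To make that concrete, here is a minimal sketch of the one-shot expected-utility calculation. The payoffs ($1,000,000 and $1,000) and the predictor-accuracy parameter are illustrative assumptions; the two framings differ only in whether the box contents are treated as independent of the choice, which is exactly where different metaphysics gets smuggled in.

```python
# Minimal sketch of the one-shot Newcomb expected-utility calculation.
# All numbers are illustrative assumptions: $1,000,000 in the opaque box,
# $1,000 in the transparent box, predictor accuracy p.

BIG, SMALL = 1_000_000, 1_000

def eu_fixed_contents(p_box_filled):
    """Expected utilities when the box contents are treated as already fixed
    and independent of the choice (full freedom at the moment of decision)."""
    one_box = p_box_filled * BIG
    two_box = p_box_filled * BIG + SMALL   # dominates for any p_box_filled
    return one_box, two_box

def eu_prediction_tracks_choice(p_accuracy):
    """Expected utilities when the prediction is assumed to correlate with
    the actual choice (a different metaphysical assumption)."""
    one_box = p_accuracy * BIG
    two_box = (1 - p_accuracy) * BIG + SMALL
    return one_box, two_box

print(eu_fixed_contents(0.5))              # two-boxing always adds $1,000
print(eu_prediction_tracks_choice(0.99))   # one-boxing wins for high accuracy
```

The decision theory (maximize expected utility) is the same in both functions; only the assumed relationship between prediction and choice changes, and that assumption is what flips the answer.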
Your comment implies you’re talking about policy, which must be modelled as an iterated game. I don’t deny that one-boxing is good in the iterated game.
My concern in this post is that there’s been a lack of distinction in the community between “one-boxing is the best policy” and “one-boxing is the best decision at one point in time in a decision-theoretic analysis, which assumes complete freedom of choice at that moment.” This lack of distinction has led many people into wishful or magical rather than rational thinking.
I don’t think that what you need has any bearing on what reality has actually given you.
As far as I can tell, I would pay Parfit’s Hitchhiker because of intuitions that were rewarded by natural selection. It would be nice to have a formalization that agrees with those intuitions.
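One way such a formalization might look (a minimal sketch with made-up utilities, and a driver assumed to predict accurately whether you would pay) is to score whole policies rather than the in-town decision taken in isolation:

```python
# Minimal sketch of scoring whole policies on Parfit's Hitchhiker.
# Utilities are illustrative: being rescued is worth 1,000, paying costs 100.
# The driver is assumed to predict accurately whether the policy would pay.

RESCUE_VALUE, PAYMENT_COST = 1_000, 100

def policy_value(would_pay):
    rescued = would_pay                 # accurate driver rescues iff you would pay
    value = RESCUE_VALUE if rescued else 0
    if rescued:                         # the payment only happens once in town
        value -= PAYMENT_COST
    return value

for would_pay in (True, False):
    print(would_pay, policy_value(would_pay))
# True 900, False 0 -> the paying policy wins even though paying in town
# looks like a pure loss once you are already rescued.
```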
or by sneaking in different metaphysics
This seems wrong to me, if you’re explicitly declaring different metaphysics (assuming you mean by metaphysics the thing I think you mean). If I view myself as a function that generates an output based on inputs, and view my decision-making procedure as the search for the best such function (for maximizing utility), then this could be considered different metaphysics from trying to cause the greatest increase in my own utility by making decisions, but it’s not obvious that the latter leads to better decisions.
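Here is a minimal sketch of that framing (hypothetical payoffs, a perfect predictor assumed for simplicity): the “choice” is over whole input-output functions, and the predictor is modelled as evaluating the very function that gets selected.

```python
# Minimal sketch of the "I am a function" framing: instead of choosing an
# act against fixed box contents, choose among whole decision functions,
# where the predictor is modelled as running the chosen function itself.
# Payoffs and the perfect-predictor assumption are illustrative only.

BIG, SMALL = 1_000_000, 1_000

def one_box(observation):
    return "one-box"

def two_box(observation):
    return "two-box"

def payoff(decision_function):
    """The predictor fills the opaque box iff the function it inspects
    would one-box; the agent then runs that same function."""
    prediction = decision_function("predictor's simulation")
    opaque = BIG if prediction == "one-box" else 0
    choice = decision_function("the real situation")
    return opaque if choice == "one-box" else opaque + SMALL

best = max([one_box, two_box], key=payoff)
print(best.__name__, payoff(best))   # one_box 1000000
```

Under this framing the search over functions picks one-boxing, without any claim that the choice retroactively changes the box contents; the difference from the act-based framing is entirely in what object is being optimized.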