However, early/foundational community writing seems to reject the idea that there’s any meaningful, conceptually distinct sense in which we can talk about an action being “reasonable.”
I think there’s a distinction (although I’m not sure I’ve talked explicitly about it before). Basically, there’s quite possibly more to what the “right” or “reasonable” action is than “the action that someone who tends to ‘win’ a lot over the course of their life would take,” because the latter isn’t well defined. In a multiverse, the same strategy/policy would lead to 100% winning in some worlds/branches and 100% losing in others, so you’d need some kind of “measure” to say who wins overall. But what the right measure is seems to be (or could be) a normative fact that can’t be determined just by looking at or thinking about who tends to “win” a lot.
ETA: Another way that “tends to win” isn’t well defined is that if you look at the person who literally wins the most, they might just be very lucky instead of actually doing the “reasonable” thing. So I think “tends to win” is more of an intuition pump for what the right conception of “reasonable” is than actually identical to it.
I agree with you on this and think it’s a really important point. Another (possibly redundant) way of getting at a similar concern, without invoking many-worlds:
Due to randomness/uncertainty, an agent that tries to maximize expected “winning” won’t necessarily win compared to an agent that does something else. If I spend a dollar on a lottery ticket with a one-in-a-billion chance of netting me a billion-and-one “win points,” then I’m taking the choice that maximizes expected winning but I’m also almost certain to lose. So we can’t treat “the action that maximizes expected winning” as synonymous with “the action taken by an agent that wins.”
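The arithmetic behind the lottery example can be sketched as follows (a minimal illustration using the numbers above, treating dollars and “win points” as interchangeable for simplicity):

```python
p_win = 1e-9          # one-in-a-billion chance of winning
prize = 1e9 + 1       # a billion-and-one "win points"
cost = 1.0            # price of the ticket

ev_buy = p_win * prize - cost   # expected net winning from buying
ev_skip = 0.0                   # expected net winning from abstaining

# Buying has (slightly) higher expected winning...
assert ev_buy > ev_skip
# ...yet the buyer loses their dollar with probability 1 - 1e-9.
p_lose = 1 - p_win
```

So the expected-winning maximizer buys the ticket, and is then almost certain to end up behind the agent who abstained.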
We can try to patch up the issue here by defining “the action that I should take” as “the action that is consistent with the VNM axioms,” but in fact either action in this case is consistent with the VNM axioms. The VNM axioms don’t imply that an agent must maximize the expected desirability of outcomes. They just imply that an agent must maximize the expected value of some function. It is totally consistent with the axioms, for example, to be risk averse and instead maximize the expected square root of desirability. If we try to define “the action I should take” in this way, then, as another downside, the claim “your actions should be consistent with the VNM axioms” also becomes a completely empty tautology.
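To make this concrete, here is a minimal sketch (the starting wealth level and the particular utility functions are illustrative assumptions, not from the discussion above) of two agents, each maximizing the expected value of *some* function and hence each consistent with the VNM axioms, who nonetheless choose differently on the lottery above:

```python
import math

p = 1e-9              # chance of winning the lottery
prize = 1e9 + 1       # "win points" paid out on a win
wealth = 10.0         # assumed starting wealth (illustrative)
cost = 1.0            # ticket price

def expected_utility(u, buy):
    """Expected utility of final wealth under utility function u."""
    if not buy:
        return u(wealth)
    return (1 - p) * u(wealth - cost) + p * u(wealth - cost + prize)

linear = lambda x: x      # risk-neutral agent: maximizes expected wealth
sqrt_u = math.sqrt        # risk-averse agent: maximizes expected sqrt(wealth)

# The risk-neutral agent prefers to buy the ticket...
assert expected_utility(linear, True) > expected_utility(linear, False)
# ...while the risk-averse agent prefers to abstain.
assert expected_utility(sqrt_u, True) < expected_utility(sqrt_u, False)
```

Both agents satisfy the VNM axioms, so “be VNM-consistent” by itself doesn’t settle which action to take.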
So it seems very hard to make non-vacuous and potentially true claims about decision theory without invoking some additional non-reducible notion of “reasonableness,” “rationality,” or what an actor “should” do. Assuming that normative anti-realism is true pretty much means assuming that there is no such notion, or that the notion doesn’t actually map onto anything in reality. And I think anti-realist views of this sort are plausible (probably for roughly the same reasons Eliezer seems to). But I think that adopting these views would also leave us with very little to say about decision theory.