Does that require you to either have the ability to commit to a plan, or the inclination to consistently pick your plan from some prior epistemic perspective?
You aren’t required to take an action (/start acting on a plan) that is worse from your current perspective than some alternative. Let maximality-dominated mean “w.r.t. each distribution in my representor, worse in expectation than some alternative.” (As opposed to “dominated” in the sense of “worse than an alternative with certainty”.) Then, in general you would need[1] to ask, “Among the actions/plans that are not maximality-dominated from my current perspective, which of these are dominated from my prior perspective?” And rule those out.
[1] If you care about diachronic norms of rationality, that is.
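The two-step filter described above can be sketched in code. This is a toy illustration under assumptions not in the comment: a representor is modeled as a list of probability distributions over states, utilities are a lookup table, and all names (`maximality_dominated`, `admissible_plans`, the acts and states) are made up for the example.

```python
# Toy sketch of the two-step rule: first drop acts that are
# maximality-dominated from the current perspective, then drop survivors
# that are maximality-dominated from the prior perspective.
# A representor is a list of probability distributions over states;
# an act is maximality-dominated iff some alternative has strictly higher
# expected utility under EVERY distribution in the representor.

def expected_utility(act, dist, utility):
    """Expected utility of `act` under one distribution over states."""
    return sum(p * utility[act][state] for state, p in dist.items())

def maximality_dominated(act, acts, representor, utility):
    """True iff some alternative beats `act` in expectation w.r.t. each
    distribution in the representor."""
    return any(
        all(expected_utility(alt, d, utility) > expected_utility(act, d, utility)
            for d in representor)
        for alt in acts if alt != act
    )

def admissible_plans(acts, current_representor, prior_representor, utility):
    # Step 1: rule out acts maximality-dominated from the current perspective.
    survivors = [a for a in acts
                 if not maximality_dominated(a, acts, current_representor, utility)]
    # Step 2: among those survivors, also rule out acts that are
    # maximality-dominated from the prior perspective.
    return [a for a in survivors
            if not maximality_dominated(a, survivors, prior_representor, utility)]

# Hypothetical example: two states, three acts. 'C' is a hedge that no act
# dominates under the (spread-out) current representor, but that 'A'
# dominates under the sharper prior.
utility = {'A': {'s1': 1.0, 's2': 0.0},
           'B': {'s1': 0.0, 's2': 1.0},
           'C': {'s1': 0.4, 's2': 0.4}}
current = [{'s1': 0.9, 's2': 0.1}, {'s1': 0.1, 's2': 0.9}]
prior = [{'s1': 0.5, 's2': 0.5}]

print(admissible_plans(['A', 'B', 'C'], current, prior, utility))  # -> ['A', 'B']
```

Note that step 2 checks prior dominance only among the survivors of step 1; whether dominance should instead be checked against the full menu of acts is a modeling choice the comment leaves open.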
Just to confirm, this means that the thing I put in quotes would probably end up being dynamically inconsistent? In order to avoid that, I need to put in an additional step of also ruling out plans that would be dominated from some constant prior perspective? (It’s a good point that these won’t be dominated from my current perspective.)
That’s right.
(Not sure you’re claiming otherwise, but FWIW, I think this is fine — it’s true that there’s some computational cost to this step, but in this context we’re talking about the normative standard rather than what’s most pragmatic for bounded agents. And once we start talking about pragmatic challenges for bounded agents, I’d be pretty dubious that, e.g., “pick a very coarse-grained ‘best guess’ prior and very coarse-grained way of approximating Bayesian updating, and try to optimize given that” would be best according to the kinds of normative standards that favor indeterminate beliefs.)