One upshot of this is that you can follow an explicitly non-(precise-)Bayesian decision procedure and still avoid dominated strategies. For example, you might explicitly specify beliefs using imprecise probabilities and make decisions using the “Dynamic Strong Maximality” rule, and still be immune to sure losses. Basically, Dynamic Strong Maximality tells you which plans are permissible given your imprecise credences, and you just pick one. And you could do this “picking” using additional substantive principles. Maybe you want to use another rule for decision-making with imprecise credences (e.g., maximin expected utility or minimax regret). Or maybe you want to account for your moral uncertainty (e.g., picking the plan that respects more deontological constraints).
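To make the “find the permissible plans, then pick one” structure concrete, here’s a toy sketch in Python. Everything in it is made up for illustration (the representor, the plan names, the utilities), I’m reading “maximality-dominated” as “some alternative is better in expectation under every distribution in the representor”, and the final step just uses maximin expected utility as one example of the kind of additional criterion mentioned above.

```python
# Toy sketch, illustrative numbers and names only. A representor is a set of
# probability distributions over states; each plan yields a utility in each
# state; a plan is "maximality-dominated" if some alternative beats it in
# expectation under *every* distribution in the representor.

representor = [
    {"good": 0.7, "bad": 0.3},   # one admissible distribution
    {"good": 0.4, "bad": 0.6},   # another admissible distribution
]

plans = {
    "hedge":  {"good": 5, "bad": 5},
    "gamble": {"good": 10, "bad": 0},
    "freeze": {"good": 1, "bad": 1},   # beaten by "hedge" under both distributions
}

def expected_utility(plan, dist):
    return sum(dist[state] * plans[plan][state] for state in dist)

def maximality_permissible(candidates):
    """Keep the plans that no alternative beats under every distribution."""
    permissible = []
    for p in candidates:
        dominated = any(
            all(expected_utility(q, d) > expected_utility(p, d) for d in representor)
            for q in candidates if q != p
        )
        if not dominated:
            permissible.append(p)
    return permissible

def pick_maximin(candidates):
    """One way to pick among the permissible plans: maximize worst-case expected utility."""
    return max(candidates, key=lambda p: min(expected_utility(p, d) for d in representor))

permissible = maximality_permissible(list(plans))
print(permissible)                # ['hedge', 'gamble']  ("freeze" is dominated)
print(pick_maximin(permissible))  # hedge
```

Here “freeze” gets screened off as maximality-dominated, and maximin expected utility then picks “hedge” from the two permissible plans; swapping in minimax regret or a deontological tally would only change that final picking step.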
Let’s say Alice has imprecise credences, and that she follows the algorithm: “At each time-step t, I will use ‘Dynamic Strong Maximality’ to find all plans that aren’t dominated. I will pick between them using [some criteria]. Then I will take the action that plan recommends.” (And then at the next time-step t+1, she re-does everything in the quotes.) (There’s a rough code sketch of this loop below.)
If Alice does this, does she end up being dynamically inconsistent? (Vulnerable to Dutch books, etc.)
(Maybe it varies depending on the criteria. I’m interested in whether you have a hunch about what the answer will be for the sorts of criteria you listed: maximin expected utility, minimax regret, picking the plan that respects more deontological constraints.)
I.e., I’m interested in: if you want to use Dynamic Strong Maximality to avoid dominated strategies, does that require you to either have the ability to commit to a plan or the inclination to consistently pick your plan from some prior epistemic perspective (like an “updateless” agent might)? Or do you automatically avoid dominated strategies even if you’re constantly recomputing your plan?
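Here’s the rough code sketch of that loop, just to pin down what I mean. The helper names are hypothetical and the first two are left as stubs; the point is only the control flow, i.e., that nothing ties Alice to the plan she picked at earlier time-steps.

```python
# Skeleton of the procedure in quotes. Helper names are hypothetical and the
# first two are left as stubs; the point is only the control flow: there is no
# commitment device, and Alice replans from scratch at every time-step.

def strong_maximality(available_plans, current_representor):
    """Drop every plan that some alternative beats in expectation under every
    distribution in the current representor (unimplemented stub; see the toy
    version earlier in the thread)."""
    ...

def pick(permissible_plans, criterion):
    """Stand-in for '[some criteria]': maximin EU, minimax regret, a
    deontological tally, whatever Alice settles on."""
    return criterion(permissible_plans)

def alice_step(available_plans, current_representor, criterion):
    permissible = strong_maximality(available_plans, current_representor)
    chosen_plan = pick(permissible, criterion)
    return chosen_plan[0]   # she executes only the first action of the freshly chosen plan

# At t+1 she updates her representor, calls alice_step again on whatever plans
# are still available, and may end up acting on a plan inconsistent with the
# one she picked at t; that re-computation is what the question is about.
```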
You aren’t required to take an action (or start acting on a plan) that is worse from your current perspective than some alternative. Let “maximality-dominated” mean “w.r.t. each distribution in my representor, worse in expectation than some alternative” (as opposed to “dominated” in the sense of “worse than an alternative with certainty”). Then, in general, you would need[1] to ask: “Among the actions/plans that are not maximality-dominated from my current perspective, which of these are dominated from my prior perspective?” And rule those out.
[1] If you care about diachronic norms of rationality, that is.
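In code, the amended check might look roughly like this. It’s a toy sketch with made-up numbers and names; I’m reading “dominated from my prior perspective” as maximality-dominated w.r.t. the prior representor, with domination checked among the plans that survive the first (current-perspective) filter.

```python
# Toy sketch of the amended check, made-up numbers and names. "Dominated from
# my prior perspective" is read here as maximality-dominated w.r.t. the prior
# representor, checked among the plans that survive the first filter.

def eu(utilities, dist):
    return sum(dist[state] * utilities[state] for state in dist)

def maximality_permissible(plans, representor):
    """Plans that no alternative beats in expectation under every distribution."""
    return [
        p for p in plans
        if not any(
            all(eu(plans[q], d) > eu(plans[p], d) for d in representor)
            for q in plans if q != p
        )
    ]

prior_representor   = [{"good": 0.5, "bad": 0.5}, {"good": 0.2, "bad": 0.8}]
current_representor = [{"good": 0.9, "bad": 0.1}, {"good": 0.6, "bad": 0.4}]  # after good news

plans = {
    "stick":  {"good": 6, "bad": 4},
    "switch": {"good": 7, "bad": 1},
}

# Step 1: what is maximality-permissible from the current perspective?
current_ok = maximality_permissible(plans, current_representor)

# Step 2: among those, rule out whatever is dominated from the prior perspective.
also_prior_ok = maximality_permissible({p: plans[p] for p in current_ok},
                                       prior_representor)

print(current_ok)     # ['stick', 'switch']
print(also_prior_ok)  # ['stick']
```

In this toy case both plans are permissible from the current perspective, but the prior-perspective step rules out “switch”; that’s the extra filtering the algorithm in quotes doesn’t do on its own.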
Just to confirm, this means that the thing I put in quotes would probably end up being dynamically inconsistent? In order to avoid that, I need to put in an additional step of also ruling out plans that would be dominated from some constant prior perspective? (It’s a good point that these won’t be dominated from my current perspective.)
That’s right.
(Not sure you’re claiming otherwise, but FWIW, I think this is fine — it’s true that there’s some computational cost to this step, but in this context we’re talking about the normative standard rather than what’s most pragmatic for bounded agents. And once we start talking about pragmatic challenges for bounded agents, I’d be pretty dubious that, e.g., “pick a very coarse-grained ‘best guess’ prior and very coarse-grained way of approximating Bayesian updating, and try to optimize given that” would be best according to the kinds of normative standards that favor indeterminate beliefs.)