But the CCT only says that if you satisfy [blah], your policy is consistent with precise EV maximization. This doesn’t imply your policy is inconsistent with Maximality, nor (as far as I know) does it tell you with respect to which distribution you should maximize precise EV in order to satisfy [blah] (or even that such a distribution is unique). So I don’t see a positive case here for precise EV maximization [ETA: as a procedure to guide your decisions, that is]. (This is also my response to your remark below about “equivalent to ‘act consistently with being an expected utility maximizer’”.)
I agree that any precise EV maximization (which imo = any good policy) is consistent with some corresponding maximality rule — in particular, with the maximality rule with the very same single precise probability distribution and the same utility function (at least modulo some reasonable assumptions about what ‘permissibility’ means). Any good policy is also consistent with any maximality rule that includes its probability distribution as one distribution in the set (because this guarantees that the best-according-to-the-precise-EV-maximization action is always permitted), as well as with any maximality rule that makes anything permissible. But I don’t see how any of this connects much to whether there is a positive case for precise EV maximization? If you buy the CCT’s assumptions, then you literally do have an argument that anything other than precise EV maximization is bad, right, which does sound like a positive case for precise EV maximization (though not directly in the psychological sense)?
Ok, maybe you’re saying that the CCT doesn’t obviously provide an argument for it being good to restructure your thinking into literally maintaining some huge probability distribution on ‘outcomes’ and explicitly maintaining some function from outcomes to the reals and explicitly picking actions such that the utility conditional on these actions having been taken by you is high (or whatever)? I agree that trying to do this very literally is a bad idea, eg because you can’t fit all possible worlds (or even just one world) in your head, eg because you don’t know likelihoods given hypotheses as you’re not logically omniscient, eg because there are difficulties with finding yourself in the world, etc — when taken super literally, the whole shebang isn’t compatible with the kinds of good reasoning we actually can do and do do and want to do. I should say that I didn’t really track the distinction between the psychological and behavioral question carefully in my original response, and had I recognized you to be asking only about the psychological aspect, I’d perhaps have focused on that more carefully in my original answer. Still, I do think the CCT has something to say about the psychological aspect as well — it provides some pro tanto reason to reorganize aspects of one’s reasoning to go some way toward assigning coherent numbers to propositions and thinking of decisions as having some kinds of outcomes and having a schema for assigning a number to each outcome and picking actions that lead to high expectations of this number. This connection is messy, but let me try to say something about what it might look like (I’m not that happy with the paragraph I’m about to give and I feel like one could write a paper at this point instead). The CCT says that if you ‘were wise’ — something like ‘if you were to be ultimately content with what you did when you look back at your life’ — your actions would need to be a particular way (from the outside). 
Now, you’re pretty interested in being content with your actions (maybe just instrumentally, because maybe you think that has to do with doing more good or being better). In some sense, you know you can’t be fully content with them (because of the reasons above). But it makes sense to try to move toward being more content with your actions. One very reasonable way to achieve this is to incorporate some structure into your thinking that makes your behavior come closer to having these desired properties. This can just look like the usual: doing a bayesian calculation to diagnose a health problem, doing an EV calculation to decide which research project to work on, etc..
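That “usual” Bayesian calculation can be as small as a few lines. Here is a minimal sketch for a diagnostic test, with all numbers hypothetical:

```python
# Hypothetical numbers: prior probability of a condition, plus the test's
# sensitivity and false-positive rate; Bayes' rule gives the posterior.
def posterior(prior, sensitivity, false_pos):
    # P(condition | positive) = P(+|c) P(c) / P(+)
    p_pos = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_pos

print(posterior(0.01, 0.9, 0.05))  # ~0.154: a positive test moves 1% to ~15%
```

Even this toy calculation illustrates the point: no giant distribution over all possible worlds is maintained, yet the reasoning is structured so that behavior comes closer to the coherent ideal.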
(There’s a chance you take there to be another sense in which we can ask about the reasonableness of expected utility maximization that’s distinct from the question that broadly has to do with characterizing behavior and also distinct from the question that has to do with which psychology one ought to choose for oneself — maybe something like what’s fundamentally principled or what one ought to do here in some other sense — and you’re interested in that thing. If so, I hope what I’ve said can be translated into claims about how the CCT would relate to that third thing.)
Anyway, if the above did not provide a decent response to what you said, then it might be worthwhile to also look at the appendix (which I ended up deprecating after understanding that you might only be interested in the psychological aspect of decision-making). In that appendix, I provide some more discussion of the CCT saying that [maximality rules which aren’t behaviorally equivalent to expected utility maximization are dominated]. I also provide some discussion recentering the broader point I wanted to make with that bullet point: that CCT-type stuff is a big red arrow pointing toward expected utility maximization, whereas no remotely-as-big red arrow is known for [imprecise probabilities + maximality].
e.g. if one takes the cost of thinking into account in the calculation, or thinks of oneself as choosing a policy
Could you expand on this with an example? I don’t follow.
For example, preferential gaps are sometimes justified by appeals to cases like: “you’re moving to another country. you can take with you your Fabergé egg xor your wedding album. you feel like each is very cool, and in a different way, and you feel like you are struggling to compare the two. given this, it feels fine for you to flip a coin to decide which one (or to pick the one on the left, or to ‘just pick one’) instead of continuing to think about it. now you remember you have 10 dollars inside the egg. it still seems fine to flip a coin to decide which one to take (or to pick the one on the left, or to ‘just pick one’).”. And then one might say one needs preferential gaps to capture this. But someone sorta trying to maximize expected utility might think about this as: “i’ll pick a randomization policy for cases where i’m finding two things hard to compare. i think this has good EV if one takes deliberation costs into account, with randomization maybe being especially nice given that my utility is concave in the quantities of various things.”.
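The “randomization policy” reasoning can be made concrete in a toy model (all numbers hypothetical): if utility is concave in the quantity of each kind of thing, then over many hard-to-compare choices a coin-flip policy yields a diversified bundle that beats always picking the same kind.

```python
import math

# Toy model (hypothetical): over n hard-to-compare choices between a
# type-A item (egg-like) and a type-B item (album-like), utility is
# concave (sqrt) in the total quantity of each type.
def utility(n_a, n_b):
    return math.sqrt(n_a) + math.sqrt(n_b)

n = 100
always_same = utility(n, 0)         # 'just pick the same kind every time'
coin_flip = utility(n / 2, n / 2)   # randomization policy, in expectation

print(always_same, coin_flip)  # 10.0 vs ~14.1: randomizing does better
```

This is only meant to gesture at why an EV maximizer with deliberation costs might adopt a randomization policy, not to model the Fabergé case exactly.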
Maximality and imprecision don’t make any reference to “default actions,”
I mostly mentioned defaultness because it appears in some attempts to precisely specify alternatives to bayesian expected utility maximization. One concrete relation is that one reasonable attempt at specifying what it is that you’ll do when multiple actions are permissible is that you choose the one that’s most ‘default’ (more precisely, if you have a prior on actions, you could choose the one with the highest prior). But if a notion of defaultness isn’t relevant for getting from your (afaict) informal decision rule to a policy, then nvm this!
I also don’t understand what’s unnatural/unprincipled/confused about permissibility or preferential gaps. They seem quite principled to me: I have a strict preference for taking action A over B (/ B is impermissible) only if I’m justified in beliefs according to which I expect A to do better than B.
I’m not sure I understand. Am I right in understanding that permissibility is defined via a notion of strict preferences, and the rest is intended as an informal restatement of the decision rule? In that case, I still feel like I don’t know what having a strict preference or permissibility means — is there some way to translate these things to actions? If the rest is intended as an independent definition of having a strict preference, then I still don’t know how anything relates to action either. (I also have some other issues in that case: I anticipate disliking the distinction between justified and unjustified beliefs being made (in particular, I anticipate thinking that a good belief-haver should just be thinking and acting according to their beliefs); it’s unclear to me what you mean by being justified in some beliefs (eg is this a non-probabilistic notion); are individual beliefs giving you expectations here or are all your beliefs jointly giving you expectations or is some subset of beliefs together giving you expectations; should I think of this expectation that A does better than B as coming from another internal conditional expected utility calculation). I guess maybe I’d like to understand how an action gets chosen from the permissible ones. If we do not in fact feel that all the actions are equal here (if we’d pay something to switch from one to another, say), then it starts to seem unnatural to make a distinction between two kinds of preference in the first place. (This is in contrast to: I feel like I can relate ‘preferences’ kinda concretely to actions in the usual vNM case, at least if I’m allowed to talk about money to resolve the ambiguity between choosing one of two things I’m indifferent between vs having a strict preference.)
Anyway, I think there’s a chance I’d be fine with sometimes thinking that various options are sort of fine in a situation, and I’m maybe even fine with this notion of fineness eg having certain properties under sweetenings of options, but I quite strongly dislike trying to make this notion of fineness correspond to this thing with a universal quantifier over your probability distributions, because it seems to me that (1) it is unhelpful because it (at least if implemented naively) doesn’t solve any of the computational issues (boundedness issues) that are a large part of why I’d entertain such a notion of fineness in the first place, (2) it is completely unprincipled (there’s no reason for this in particular, and the split of uncertainties is unsatisfying), and (3) it plausibly gives disastrous behavior if taken seriously. But idk maybe I can’t really even get behind that notion of fineness, and I’m just confusing it with the somewhat distinct notion of fineness that I use when I buy two different meals to distribute among myself and a friend and tell them that I’m fine with them having either one, which I think is well-reduced to probably having a smaller preference than my friend. Anyway, obviously whether such a notion of fineness is desirable depends on how you want it to relate to other things (in particular, actions), and I’m presently sufficiently unsure about how you want it to relate to these other things to be unsure about whether a suitable such notion exists.
basically everything becomes permissible, which seems highly undesirable
This is a much longer conversation, but briefly: I think it’s ad hoc / putting the cart before the horse to shape our epistemology to fit our intuitions about what decision guidance we should have.
It seems to me like you were like: “why not regiment one’s thinking xyz-ly?” (in your original question), to which I was like “if one regiments one’s thinking xyz-ly, then it’s an utter disaster” (in that bullet point), and now you’re like “even if it’s an utter disaster, I don’t care”. And I guess my response is that you should care about it being an utter disaster, but I guess I’m confused enough about why you wouldn’t care that it doesn’t make a lot of sense for me to try to write a library of responses.
Appendix with some things about CCT and expected utility maximization and [imprecise probabilities] + maximality that got cut
Precise EV maximization is a special case of [imprecise probabilities] + maximality (namely, the special case where your imprecise probabilities are in fact precise, at least modulo some reasonable assumptions about what things mean), so unless your class of decision rules turns out to be precisely equivalent to the class of decision rules which do precise EV maximization, the CCT does in fact say it contains some bad rules. (And if it did turn out to be equivalent, then I’d be somewhat confused about why we’re talking about it your way, because it’d seem to me like it’d then just be a less nice way to describe the same thing.) And at least on the surface, the class of decision rules does not appear to be equivalent, so the CCT indeed does speak against some rules in this class (and in fact, all rules in this class which cannot be described as precise EV maximization).
If you filled in the details of your maximality-type rule enough to tell me what your policy is — in particular, hypothetically, maybe you’d want to specify sth like the following: what it means for some options to be ‘permissible’ or how an option gets chosen from the ‘permissible options’, potentially something about how current choices relate to past choices, and maybe just what kind of POMDP, causal graph, decision tree, or whatever game setup we’re assuming in the first place — such that your behavior then looks like bayesian expected utility maximization (with some particular probability distribution and some particular utility function), then I guess I’ll no longer be objecting to you using that rule (to be precise: I would no longer be objecting to it for being dominated per the CCT or some such theorem, but I might still object to the psychological implementation of your policy on other grounds).
That said, I think the most straightforward ways [to start from your statement of the maximality rule, to specify some sequential setup, to make the rule precise, and to then derive a policy for the sequential setup from the rule] do give you a policy which you would yourself consider dominated. I can imagine a way to make your rule precise that avoids a dominated policy, but it ends up just being ‘anything is permissible as long as you make sure you looked like a bayesian expected utility maximizer at the end of the day’ (I think the rule of Thornley and Petersen is this), and at that point I’m feeling like we’re stressing some purely psychological distinction whose relevance to matters of interest I’m failing to see.
But maybe more importantly, at this point, I’d feel like we’ve lost the plot somewhat. What I intended to say with my original bullet point was more like: we’ve constructed this giant red arrow (i.e., coherence theorems; ok, it’s maybe not that giant in some absolute sense, but imo it is as big as presently existing arrows get for things this precise in a domain this messy) pointing at one kind of structure (i.e., bayesian expected utility maximization) to have ‘your beliefs and actions ultimately correspond to’, and then you’re like “why not this other kind of structure (imprecise probabilities, maximality rules) though?” and then my response was “well, for one, there is the giant red arrow pointing at this other structure, and I don’t know of any arrow pointing at your structure”, and I don’t really know how to see your response as a response to this.
If you buy the CCT’s assumptions, then you literally do have an argument that anything other than precise EV maximization is bad
No, you have an argument that {anything that cannot be represented after the fact as precise EV maximization, with respect to some utility function and distribution} is bad. This doesn’t imply that an agent who maintains imprecise beliefs will do badly.
Maybe you’re thinking something like: “The CCT says that my policy is guaranteed to be Pareto-efficient iff it maximizes EV w.r.t. some distribution. So even if I don’t know which distribution to choose, and even though following Maximality doesn’t guarantee that I’ll be Pareto-inefficient, I at least know I don’t violate Pareto-efficiency if I do precise EV maximization”?
If so: I’d say that there are several imprecise decision rules that can be represented after the fact as precise EV max w.r.t. some distributions, so the CCT doesn’t rule them out. E.g.:
The minimax regret rule (sec 5.4.2 of Bradley (2012)) is equivalent to EV max w.r.t. the distribution in your representor that induces maximum regret.
The maximin rule (sec 5.4.1) is equivalent to EV max w.r.t. the most pessimistic distribution.
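For concreteness, here is a sketch of both rules over a finite action set and a finite representor, with hypothetical payoffs (not taken from Bradley):

```python
# ev[a][p] is the expected utility of action a under distribution p
# (hypothetical numbers), for a two-distribution representor.
ev = {"a1": [3.0, 0.0], "a2": [0.0, 3.0], "a3": [2.0, 2.0]}
n_dists = 2

# Maximin: pick the action whose worst-case EV over the representor is best.
maximin = max(ev, key=lambda a: min(ev[a]))

# Minimax regret: regret of a under p is (best EV under p) - (a's EV under p);
# pick the action whose maximum regret over the representor is smallest.
best = [max(ev[a][p] for a in ev) for p in range(n_dists)]
max_regret = {a: max(best[p] - ev[a][p] for p in range(n_dists)) for a in ev}
minimax_regret = min(max_regret, key=max_regret.get)

print(maximin, minimax_regret)  # a3 a3: both rules pick the hedged action here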
You might say “Then why not just do precise EV max w.r.t. those distributions?” But the whole problem you face as a decision-maker is, how do you decide which distribution? Different distributions recommend different policies. If you endorse precise beliefs, it seems you’ll commit to one distribution that you think best represents your epistemic state. Whereas someone with imprecise beliefs will say: “My epistemic state is not represented by just one distribution. I’ll evaluate the imprecise decision rules based on which decision-theoretic desiderata they satisfy, then apply the most appealing decision rule (or some way of aggregating them) w.r.t. my imprecise beliefs.” If the decision procedure you follow is psychologically equivalent to my previous sentence, then I have no objection to your procedure — I just think it would be misleading to say you endorse precise beliefs in that case.
Sorry, I feel like the point I wanted to make with my original bullet point is somewhat vaguer/different than what you’re responding to. Let me try to clarify what I wanted to do with that argument with a caricatured version of the present argument-branch from my point of view:
your original question (caricatured): “The Sun prayer decision rule is as follows: you pray to the Sun; this makes a certain set of actions seem auspicious to you. Why not endorse the Sun prayer decision rule?”
my bullet point: “Bayesian expected utility maximization has this big red arrow pointing toward it, but the Sun prayer decision rule has no big red arrow pointing toward it.”
your response: “Maybe a few specific Sun prayer decision rules are also pointed to by that red arrow?”
my response: “The arrow does not point toward most Sun prayer decision rules. In fact, it only points toward the ones that are secretly bayesian expected utility maximization. Anyway, I feel like this does very little to address my original point that there is this big red arrow pointing toward bayesian expected utility maximization and no big red arrow pointing toward Sun prayer decision rules.”
(See the appendix to my previous comment for more on this.)
That said, I admit I haven’t said super clearly how the arrow ends up pointing to structuring your psychology in a particular way (as opposed to just pointing at a class of ways to behave). I think I won’t do a better job at this atm than what I said in the second paragraph of my previous comment.
The minimax regret rule (sec 5.4.2 of Bradley (2012)) is equivalent to EV max w.r.t. the distribution in your representor that induces maximum regret.
I’m (inside view) 99.9% sure this will be false/nonsense in a sequential setting. I’m (inside view) 99% sure this is false/nonsense even in the one-shot case. I guess the issue is that different actions get assigned their max regret by different distributions, so I’m not sure what you mean when you talk about the distribution that induces maximum regret. And indeed, it is easy to come up with a case where the action that gets chosen is not best according to any distribution in your set of distributions: let there be one action which is uniformly fine and also for each distribution in the set, let there be an action which is great according to that distribution and disastrous according to every other distribution; the uniformly fine action gets selected, but this isn’t EV max for any distribution in your representor. That said, if we conceive of the decision rule as picking out a single action to perform, then because the decision rule at least takes Pareto improvements, I think a convex optimization argument says that the single action it picks is indeed the maximal EV one according to some distribution (though not necessarily one in your set). However, if we conceive of the decision rule as giving preferences between actions or if we try to use it in some sequential setup, then I’m >95% sure there is no way to see it as EV max (except in some silly way, like forgetting you had preferences in the first place).
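A concrete numeric instance of the construction above (hypothetical payoffs, pure actions only, no mixtures), with minimax regret computed mechanically:

```python
# ev[a] = [EV under p0, EV under p1] for a two-distribution representor
# (hypothetical payoffs). "fine" is the uniformly fine action; each "bet"
# is great under one distribution and disastrous under the other.
ev = {"fine": [5.0, 5.0], "bet_p0": [10.0, -10.0], "bet_p1": [-10.0, 10.0]}

best = [max(ev[a][p] for a in ev) for p in (0, 1)]
max_regret = {a: max(best[p] - ev[a][p] for p in (0, 1)) for a in ev}
chosen = min(max_regret, key=max_regret.get)

# Minimax regret picks "fine" (max regret 5 vs 20 for each bet), yet "fine"
# maximizes EV under neither p0 nor p1.
print(chosen)  # fine
```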
The maximin rule (sec 5.4.1) is equivalent to EV max w.r.t. the most pessimistic distribution.
I didn’t think about this as carefully, but >90% that the paragraph above also applies with minor changes.
You might say “Then why not just do precise EV max w.r.t. those distributions?” But the whole problem you face as a decision-maker is, how do you decide which distribution? Different distributions recommend different policies. If you endorse precise beliefs, it seems you’ll commit to one distribution that you think best represents your epistemic state. Whereas someone with imprecise beliefs will say: “My epistemic state is not represented by just one distribution. I’ll evaluate the imprecise decision rules based on which decision-theoretic desiderata they satisfy, then apply the most appealing decision rule (or some way of aggregating them) w.r.t. my imprecise beliefs.” If the decision procedure you follow is psychologically equivalent to my previous sentence, then I have no objection to your procedure — I just think it would be misleading to say you endorse precise beliefs in that case.
I think I agree in some very weak sense. For example, when I’m trying to diagnose a health issue,
I do want to think about which priors and likelihoods to use — it’s not like these things are immediately given to me or something. In this sense, I’m at some point contemplating many possible distributions to use. But I guess we do have some meaningful disagreement left — I guess I take the most appealing decision rule to be more like pure aggregation than you do; I take imprecise probabilities with maximality to be a major step toward madness from doing something that stays closer to expected utility maximization.
my response: “The arrow does not point toward most Sun prayer decision rules. In fact, it only points toward the ones that are secretly bayesian expected utility maximization. Anyway, I feel like this does very little to address my original point that there is this big red arrow pointing toward bayesian expected utility maximization and no big red arrow pointing toward Sun prayer decision rules.”
I don’t really understand your point, sorry. “Big red arrows towards X” are only a problem for doing Y if (1) they tell me that doing Y is inconsistent with doing [the form of X that’s necessary to avoid leaving value on the table]. And these arrows aren’t action-guiding for me unless (2) they tell me which particular variant of X to do. I’ve argued that there is no sense in which either (1) or (2) is true. Further, I think there are various big green arrows towards Y, as sketched in the SEP article and Mogensen paper I linked in the OP, though I understand if these aren’t fully satisfying positive arguments. (I tentatively plan to write such positive arguments up elsewhere.)
I’m just not swayed by vibes-level “arrows” if there isn’t an argument that my approach is leaving value on the table by my lights, or that you have a particular approach that doesn’t do so.
And indeed, it is easy to come up with a case where the action that gets chosen is not best according to any distribution in your set of distributions: let there be one action which is uniformly fine and also for each distribution in the set, let there be an action which is great according to that distribution and disastrous according to every other distribution; the uniformly fine action gets selected, but this isn’t EV max for any distribution in your representor.
Oops sorry, my claim had the implicit assumptions that (1) your representor includes all the convex combinations, and (2) you can use mixed strategies. ((2) is standard in decision theory, and I think (1) is a reasonable assumption — if I feel clueless as to how much I endorse distribution p vs distribution q, it seems weird for me to still be confident that I don’t endorse a mixture of the two.)
If those assumptions hold, I think you can show that the max-regret-minimizing action maximizes EV w.r.t. some distribution in your representor. I don’t have a proof on hand but would welcome counterexamples. In your example, you can check that either the uniformly fine action does best on a mixture distribution, or a mix of the other actions does best (lmk if spelling this out would be helpful).
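One way to spell this out for the uniformly-fine-action example (hypothetical payoffs): once the representor is closed under convex combinations, the 50/50 mixture of the two extreme distributions is itself in the representor, and the uniformly fine action is EV-maximal w.r.t. that mixture.

```python
# ev[a] = [EV under p0, EV under p1] (hypothetical payoffs), as in the
# uniformly-fine-action example; the representor's convex closure includes
# the mixture r = w*p0 + (1-w)*p1.
ev = {"fine": [5.0, 5.0], "bet_p0": [10.0, -10.0], "bet_p1": [-10.0, 10.0]}
w = 0.5  # weight on p0 in the mixture distribution

ev_mix = {a: w * ev[a][0] + (1 - w) * ev[a][1] for a in ev}
print(ev_mix)  # fine: 5.0, bet_p0: 0.0, bet_p1: 0.0 — "fine" is EV-max here
```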
Oh ok yea that’s a nice setup and I think I know how to prove that claim — the convex optimization argument I mentioned should give that. I still endorse the branch of my previous comment that comes after considering roughly that option though:
That said, if we conceive of the decision rule as picking out a single action to perform, then because the decision rule at least takes Pareto improvements, I think a convex optimization argument says that the single action it picks is indeed the maximal EV one according to some distribution (though not necessarily one in your set). However, if we conceive of the decision rule as giving preferences between actions or if we try to use it in some sequential setup, then I’m >95% sure there is no way to see it as EV max (except in some silly way, like forgetting you had preferences in the first place).
The branch that’s about sequential decision-making, you mean? I’m unconvinced by this too, see e.g. here — I’d appreciate more explicit arguments for this being “nonsense.”
To clarify, I think in this context I’ve only said that the claim “The minimax regret rule (sec 5.4.2 of Bradley (2012)) is equivalent to EV max w.r.t. the distribution in your representor that induces maximum regret” (and maybe the claim after it) was “false/nonsense” — in particular, because it doesn’t make sense to talk about a distribution that induces maximum regret (without reference to a particular action) — which I’m guessing you agree with.
I wanted to say that I endorse the following:
Neither of the two decision rules you mentioned is (in general) consistent with any EV max if we conceive of it as giving your preferences (not just picking out a best option), nor if we conceive of it as telling you what to do on each step of a sequential decision-making setup.
I think basically any setup provides an example of either of these claims. Here’s a canonical counterexample for the version with preferences, using the max_{actions} min_{probability distributions} EV (i.e., infrabayes) decision rule, i.e. with our preferences corresponding to the min_{probability distributions} EV ranking:
Let a and c be actions and let b be flipping a fair coin and then doing a or c depending on the outcome. It is easy to construct a case where the max-min rule strictly prefers b to a and also strictly prefers b to c, and indeed where this preference is strong enough that the rule still strictly prefers b to a small enough sweetening of a and also still prefers b to a small enough sweetening of c (in fact, a generic setup will have such a triple). Call these sweetenings a+ and c+ (think of these as a-but-you-also-get-one-cent or a-but-you-also-get-one-extra-moment-of-happiness or whatever; the important thing is that all utility functions under consideration should consider this one cent or one extra moment of happiness or whatever a positive). However, every EV max rule (that cares about the one cent) will strictly disprefer b to at least one of a+ or c+: if that weren’t the case, the EV max rule would need to weakly prefer b over a coinflip between a+ and c+; but that coinflip is just b plus the sweetening (call it b+), so the rule would weakly prefer b to b+, contradicting its caring about the sweetening. So these min preferences are incompatible with maximizing any EV.
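A numeric instance of such a triple (hypothetical utilities). Since b is an objective coinflip between a and c, its EV under any distribution is the average of theirs, so checking mixtures of the two distributions p and q suffices to illustrate the point:

```python
# EVs of each act under two distributions p and q (hypothetical numbers).
a, c = (10.0, 0.0), (0.0, 10.0)
b = (5.0, 5.0)              # flip a fair coin, then do a or c
eps = 0.01                  # the 'one cent' sweetening
a_plus = (a[0] + eps, a[1] + eps)
c_plus = (c[0] + eps, c[1] + eps)

# The max-min rule strictly prefers b to both sweetenings:
assert min(b) > min(a_plus) and min(b) > min(c_plus)

# But under every mixture r = w*p + (1-w)*q, at least one sweetening beats b,
# since b's EV is always the average of a's and c's:
for w in [i / 100 for i in range(101)]:
    ev = lambda x: w * x[0] + (1 - w) * x[1]
    assert max(ev(a_plus), ev(c_plus)) > ev(b)

print("max-min preferences here are inconsistent with any EV maximization")
```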
There is a canonical way in which a counterexample in preference-land can be turned into a counterexample in sequential-decision-making-land: just make the “sequential” setup really just be a two-step game where you first randomly pick a pair of actions to give the agent a choice between, and then the agent makes some choice. The game forces the max min agent to “reveal its preferences” sufficiently for its policy to be revealed to be inconsistent with EV maxing. (This is easiest to see if the agent is forced to just make a binary choice. But it’s still true even if you avoid the strictly binary choice being forced upon the agent by saying that the agent still has access to (internal) randomization.)
Regarding the Thornley paper you link: I’ve said some stuff about it in my earlier comments; my best guess for what to do next would be to prove some theorem about behavior that doesn’t make explicit use of a completeness assumption, but also it seems likely that this would fail to relate sufficiently to our central disagreements to be worthwhile. I guess I’m generally feeling like I might bow out of this written conversation soon/now, sorry! But I’d be happy to talk more about this synchronously — if you’d like to schedule a meeting, feel free to message me on the LW messenger.
It seems to me like you were like: “why not regiment one’s thinking xyz-ly?” (in your original question), to which I was like “if one regiments one’s thinking xyz-ly, then it’s an utter disaster” (in that bullet point), and now you’re like “even if it’s an utter disaster, I don’t care”
My claim is that your notion of “utter disaster” presumes that a consequentialist under deep uncertainty has some sense of what to do, such that they don’t consider ~everything permissible. This begs the question against severe imprecision. I don’t really see why we should expect our pretheoretic intuitions about the verdicts of a value system as weird as impartial longtermist consequentialism, under uncertainty as severe as ours, to be a guide to our epistemics.
I agree that intuitively it’s a very strange and disturbing verdict that ~everything is permissible! But that seems to be the fault of impartial longtermist consequentialism, not imprecise beliefs.
I still feel like I don’t know what having a strict preference or permissibility means — is there some way to translate these things to actions?
As an aspiring rational agent, I’m faced with lots of options. What do I do? Ideally I’d like to just be able to say which option is “best” and do that. If I have a complete ordering over the expected utilities of the options, then clearly the best option is the expected utility-maximizing one. If I don’t have such a complete ordering, things are messier. I start by ruling out dominated options (as Maximality does). The options in the remaining set are all “permissible” in the sense that I haven’t yet found a reason to rule them out.
I do of course need to choose an action eventually. But I have some decision-theoretic uncertainty. So, given the time to do so, I want to deliberate about which ways of narrowing down this set of options further seem most reasonable (i.e., satisfy principles of rational choice I find compelling).
(Basically I think EU maximization is a special case of “narrow down the permissible set as much as you can via principles of rational choice,[1] then just pick something from whatever remains.” It’s so straightforward in this case that we don’t even recognize we’re identifying a (singleton) “permissible set.”)
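The first step above, ruling out dominated options à la Maximality, can be sketched under one standard precisification (hypothetical EVs, finite representor): an option is ruled out iff some alternative has strictly higher EV under every distribution in the representor.

```python
# ev[option] = [EV under p1, EV under p2] for a two-distribution representor
# (hypothetical numbers).
def permissible(ev):
    def strictly_dominates(y, x):
        # y beats x under every distribution in the representor
        return all(ey > ex for ey, ex in zip(ev[y], ev[x]))
    return [x for x in ev
            if not any(strictly_dominates(y, x) for y in ev if y != x)]

ev = {"A": [4, 1], "B": [1, 4], "C": [3, 3], "D": [2, 2]}
print(permissible(ev))  # ['A', 'B', 'C'] — only D is ruled out (by C)
```

Note how the permissible set typically remains non-singleton, which is where the further deliberation described above comes in.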
Now, maybe you’d just want to model this situation like: “For embedded agents, ‘deliberation’ is just an option like any other. Your revealed strict preference is to deliberate about rational choice.” I might be fine with this model.[2] But:
For the purposes of discussing how {the VOI of deliberation about rational choice} compares to {the value of going with our current “best guess” in some sense}, I find it conceptually helpful to think of “choosing to deliberate about rational choice” as qualitatively different from other choices.
The procedure I use to decide to deliberate about rational choice principles is not “I maximize EV w.r.t. some beliefs,” it’s “I see that my permissible set is not a singleton, I want more action-guidance, so I look for more action-guidance.”
Though I think once you open the door to this embedded agency stuff, reasoning about rational choice in general becomes confusing even for people who like precise EV max.
Now, you’re pretty interested in being content with your actions (maybe just instrumentally, because you think that has to do with doing more good or being better). In some sense, you know you can’t be fully content with them (because of the reasons above). But it makes sense to try to move toward being more content with your actions. One very reasonable way to achieve this is to incorporate some structure into your thinking that makes your behavior come closer to having these desired properties. This can just look like the usual: doing a bayesian calculation to diagnose a health problem, doing an EV calculation to decide which research project to work on, etc.
(There’s a chance you take there to be a third sense in which we can ask about the reasonableness of expected utility maximization — distinct from the question that broadly has to do with characterizing behavior, and also distinct from the question of which psychology one ought to choose for oneself; maybe something like what’s fundamentally principled, or what one ought to do in some other sense — and that you’re interested in that thing. If so, I hope what I’ve said can be translated into claims about how the CCT would relate to that third thing.)
Anyway, if the above did not provide a decent response to what you said, then it might be worthwhile to also look at the appendix (which I ended up deprecating after understanding that you might only be interested in the psychological aspect of decision-making). In that appendix, I provide some more discussion of the CCT saying that [maximality rules which aren’t behaviorally equivalent to expected utility maximization are dominated]. I also provide some discussion recentering the broader point I wanted to make with that bullet point: that CCT-type stuff is a big red arrow pointing toward expected utility maximization, whereas no remotely-as-big red arrow is known for [imprecise probabilities + maximality].
For example, preferential gaps are sometimes justified by appeals to cases like: “you’re moving to another country. you can take with you your Fabergé egg xor your wedding album. you feel like each is very cool, and in a different way, and you feel like you are struggling to compare the two. given this, it feels fine for you to flip a coin to decide which one (or to pick the one on the left, or to ‘just pick one’) instead of continuing to think about it. now you remember you have 10 dollars inside the egg. it still seems fine to flip a coin to decide which one to take (or to pick the one on the left, or to ‘just pick one’).” And then one might say one needs preferential gaps to capture this. But someone sorta trying to maximize expected utility might think about this as: “i’ll pick a randomization policy for cases where i’m finding two things hard to compare. i think this has good EV if one takes deliberation costs into account, with randomization maybe being especially nice given that my utility is concave in the quantities of various things.”
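To illustrate the concavity point with made-up numbers (the utility function, the count of choices, and all names here are purely hypothetical): over many hard-to-compare egg-vs-album choices, a coin-flipping policy leaves you with a mix of goods, which a utility concave in each quantity prefers to a pile of just one kind.

```python
from math import comb, sqrt

N = 100  # number of hard-to-compare choices faced over a lifetime

def utility(eggs, albums):
    # concave in the quantity of each kind of good
    return sqrt(eggs) + sqrt(albums)

always_egg = utility(N, 0)  # 10.0

# exact expected utility of flipping a fair coin at each choice
# (number of eggs is Binomial(N, 0.5)); roughly 14.1
randomize = sum(comb(N, k) * 0.5**N * utility(k, N - k)
                for k in range(N + 1))

assert randomize > always_egg
```

The gap comes entirely from concavity: with utility linear in each quantity, the two policies would tie in expectation.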
I mostly mentioned defaultness because it appears in some attempts to precisely specify alternatives to bayesian expected utility maximization. One concrete relation is that one reasonable attempt at specifying what it is that you’ll do when multiple actions are permissible is that you choose the one that’s most ‘default’ (more precisely, if you have a prior on actions, you could choose the one with the highest prior). But if a notion of defaultness isn’t relevant for getting from your (afaict) informal decision rule to a policy, then nvm this!
I’m not sure I understand. Am I right in understanding that permissibility is defined via a notion of strict preferences, and the rest is intended as an informal restatement of the decision rule? In that case, I still feel like I don’t know what having a strict preference or permissibility means — is there some way to translate these things to actions? If the rest is intended as an independent definition of having a strict preference, then I still don’t know how anything relates to action either. (I also have some other issues in that case: I anticipate disliking the distinction between justified and unjustified beliefs being made (in particular, I anticipate thinking that a good belief-haver should just be thinking and acting according to their beliefs); it’s unclear to me what you mean by being justified in some beliefs (eg is this a non-probabilistic notion); are individual beliefs giving you expectations here or are all your beliefs jointly giving you expectations or is some subset of beliefs together giving you expectations; should I think of this expectation that A does better than B as coming from another internal conditional expected utility calculation). I guess maybe I’d like to understand how an action gets chosen from the permissible ones. If we do not in fact feel that all the actions are equal here (if we’d pay something to switch from one to another, say), then it starts to seem unnatural to make a distinction between two kinds of preference in the first place. (This is in contrast to: I feel like I can relate ‘preferences’ kinda concretely to actions in the usual vNM case, at least if I’m allowed to talk about money to resolve the ambiguity between choosing one of two things I’m indifferent between vs having a strict preference.)
Anyway, I think there’s a chance I’d be fine with sometimes thinking that various options are sort of fine in a situation, and I’m maybe even fine with this notion of fineness eg having certain properties under sweetenings of options, but I quite strongly dislike trying to make this notion of fineness correspond to this thing with a universal quantifier over your probability distributions, because it seems to me that (1) it is unhelpful because it (at least if implemented naively) doesn’t solve any of the computational issues (boundedness issues) that are a large part of why I’d entertain such a notion of fineness in the first place, (2) it is completely unprincipled (there’s no reason for this in particular, and the split of uncertainties is unsatisfying), and (3) it plausibly gives disastrous behavior if taken seriously. But idk maybe I can’t really even get behind that notion of fineness, and I’m just confusing it with the somewhat distinct notion of fineness that I use when I buy two different meals to distribute among myself and a friend and tell them that I’m fine with them having either one, which I think is well-reduced to probably having a smaller preference than my friend. Anyway, obviously whether such a notion of fineness is desirable depends on how you want it to relate to other things (in particular, actions), and I’m presently sufficiently unsure about how you want it to relate to these other things to be unsure about whether a suitable such notion exists.
It seems to me like you were like: “why not regiment one’s thinking xyz-ly?” (in your original question), to which I was like “if one regiments one thinking xyz-ly, then it’s an utter disaster” (in that bullet point), and now you’re like “even if it’s an utter disaster, I don’t care”. And I guess my response is that you should care about it being an utter disaster, but I guess I’m confused enough about why you wouldn’t care that it doesn’t make a lot of sense for me to try to write a library of responses.
Appendix with some things about CCT and expected utility maximization and [imprecise probabilities] + maximality that got cut
Precise EV maximization is a special case of [imprecise probabilities] + maximality (namely, the special case where your imprecise probabilities are in fact precise, at least modulo some reasonable assumptions about what things mean), so unless your class of decision rules turns out to be precisely equivalent to the class of decision rules which do precise EV maximization, the CCT does in fact say it contains some bad rules. (And if it did turn out to be equivalent, then I’d be somewhat confused about why we’re talking about it your way, because it’d seem to me like it’d then just be a less nice way to describe the same thing.) And at least on the surface, the class of decision rules does not appear to be equivalent, so the CCT indeed does speak against some rules in this class (and in fact, all rules in this class which cannot be described as precise EV maximization).
If you filled in the details of your maximality-type rule enough to tell me what your policy is — in particular, hypothetically, maybe you’d want to specify sth like the following: what it means for some options to be ‘permissible’ or how an option gets chosen from the ‘permissible options’, potentially something about how current choices relate to past choices, and maybe just what kind of POMDP, causal graph, decision tree, or whatever game setup we’re assuming in the first place — such that your behavior then looks like bayesian expected utility maximization (with some particular probability distribution and some particular utility function), then I guess I’ll no longer be objecting to you using that rule (to be precise: I would no longer be objecting to it for being dominated per the CCT or some such theorem, but I might still object to the psychological implementation of your policy on other grounds).
That said, I think the most straightforward ways [to start from your statement of the maximality rule and to specify some sequential setup and to make the rule precise and to then derive a policy for the sequential setup from the rule] do give you a policy which you would yourself consider dominated. I can imagine a way to make your rule precise that doesn’t give you a dominated policy that ends up just being ‘anything is permissible as long as you make sure you looked like a bayesian expected utility maximizer at the end of the day’ (I think the rule of Thornley and Petersen is this), but at that point I’m feeling like we’re stressing some purely psychological distinction whose relevance to matters of interest I’m failing to see.
But maybe more importantly, at this point, I’d feel like we’ve lost the plot somewhat. What I intended to say with my original bullet point was more like: we’ve constructed this giant red arrow (i.e., coherence theorems; ok, it’s maybe not that giant in some absolute sense, but imo it is as big as presently existing arrows get for things this precise in a domain this messy) pointing at one kind of structure (i.e., bayesian expected utility maximization) to have ‘your beliefs and actions ultimately correspond to’, and then you’re like “why not this other kind of structure (imprecise probabilities, maximality rules) though?” and then my response was “well, for one, there is the giant red arrow pointing at this other structure, and I don’t know of any arrow pointing at your structure”, and I don’t really know how to see your response as a response to this.
No, you have an argument that {anything that cannot be represented after the fact as precise EV maximization, with respect to some utility function and distribution} is bad. This doesn’t imply that an agent who maintains imprecise beliefs will do badly.
Maybe you’re thinking something like: “The CCT says that my policy is guaranteed to be Pareto-efficient iff it maximizes EV w.r.t. some distribution. So even if I don’t know which distribution to choose, and even though I’m not guaranteed not to be Pareto-efficient if I follow Maximality, I at least know I don’t violate Pareto-efficiency if I do precise EV maximization”?
If so: I’d say that there are several imprecise decision rules that can be represented after the fact as precise EV max w.r.t. some distributions, so the CCT doesn’t rule them out. E.g.:
The minimax regret rule (sec 5.4.2 of Bradley (2012)) is equivalent to EV max w.r.t. the distribution in your representor that induces maximum regret.
The maximin rule (sec 5.4.1) is equivalent to EV max w.r.t. the most pessimistic distribution.
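For concreteness, here is a minimal sketch of one common formalization of these two rules in a toy model — finite states, a representor given as an explicit list of distributions, actions given as per-state utility vectors. All names and numbers are hypothetical, not taken from Bradley (2012).

```python
# Toy model: a "state" is an index, a distribution is a list of state
# probabilities, and an action is a list of state utilities.

def ev(action, dist):
    return sum(p * u for p, u in zip(dist, action))

def maximin(actions, representor):
    # pick the action with the highest worst-case EV across the representor
    return max(actions, key=lambda a: min(ev(a, d) for d in representor))

def minimax_regret(actions, representor):
    # regret of `a` under `d`: how far `a` falls short of the best action
    # under `d`; pick the action minimizing worst-case regret
    def worst_regret(a):
        return max(max(ev(b, d) for b in actions) - ev(a, d)
                   for d in representor)
    return min(actions, key=worst_regret)

representor = [[0.8, 0.2], [0.2, 0.8]]
safe, risky = [1, 1], [3, -2]
assert maximin([safe, risky], representor) == safe
assert minimax_regret([safe, risky], representor) == safe
```

In this particular pair of actions the two rules happen to agree; they come apart in other cases.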
You might say “Then why not just do precise EV max w.r.t. those distributions?” But the whole problem you face as a decision-maker is, how do you decide which distribution? Different distributions recommend different policies. If you endorse precise beliefs, it seems you’ll commit to one distribution that you think best represents your epistemic state. Whereas someone with imprecise beliefs will say: “My epistemic state is not represented by just one distribution. I’ll evaluate the imprecise decision rules based on which decision-theoretic desiderata they satisfy, then apply the most appealing decision rule (or some way of aggregating them) w.r.t. my imprecise beliefs.” If the decision procedure you follow is psychologically equivalent to my previous sentence, then I have no objection to your procedure — I just think it would be misleading to say you endorse precise beliefs in that case.
Sorry, I feel like the point I wanted to make with my original bullet point is somewhat vaguer/different than what you’re responding to. Let me try to clarify what I wanted to do with that argument with a caricatured version of the present argument-branch from my point of view:
your original question (caricatured): “The Sun prayer decision rule is as follows: you pray to the Sun; this makes a certain set of actions seem auspicious to you. Why not endorse the Sun prayer decision rule?”
my bullet point: “Bayesian expected utility maximization has this big red arrow pointing toward it, but the Sun prayer decision rule has no big red arrow pointing toward it.”
your response: “Maybe a few specific Sun prayer decision rules are also pointed to by that red arrow?”
my response: “The arrow does not point toward most Sun prayer decision rules. In fact, it only points toward the ones that are secretly bayesian expected utility maximization. Anyway, I feel like this does very little to address my original point that there is this big red arrow pointing toward bayesian expected utility maximization and no big red arrow pointing toward Sun prayer decision rules.”
(See the appendix to my previous comment for more on this.)
That said, I admit I haven’t said super clearly how the arrow ends up pointing to structuring your psychology in a particular way (as opposed to just pointing at a class of ways to behave). I think I won’t do a better job at this atm than what I said in the second paragraph of my previous comment.
I’m (inside view) 99.9% sure this will be false/nonsense in a sequential setting. I’m (inside view) 99% sure this is false/nonsense even in the one-shot case. I guess the issue is that different actions get assigned their max regret by different distributions, so I’m not sure what you mean when you talk about the distribution that induces maximum regret. And indeed, it is easy to come up with a case where the action that gets chosen is not best according to any distribution in your set of distributions: let there be one action which is uniformly fine and also for each distribution in the set, let there be an action which is great according to that distribution and disastrous according to every other distribution; the uniformly fine action gets selected, but this isn’t EV max for any distribution in your representor. That said, if we conceive of the decision rule as picking out a single action to perform, then because the decision rule at least takes Pareto improvements, I think a convex optimization argument says that the single action it picks is indeed the maximal EV one according to some distribution (though not necessarily one in your set). However, if we conceive of the decision rule as giving preferences between actions or if we try to use it in some sequential setup, then I’m >95% sure there is no way to see it as EV max (except in some silly way, like forgetting you had preferences in the first place).
I didn’t think about this as carefully, but >90% that the paragraph above also applies with minor changes.
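The construction described above can be instantiated with toy numbers (all hypothetical, just filling in the schema): two distributions, one uniformly fine action, and one “great here, disastrous elsewhere” action per distribution. Both rules pick the uniformly fine action, which is not EV-max under any distribution in the representor.

```python
# Hypothetical toy instance: two states, representor {p, q},
# utilities listed per state.
def ev(action, dist):
    return sum(pr * u for pr, u in zip(dist, action))

p, q = [1.0, 0.0], [0.0, 1.0]
fine    = [1, 1]      # uniformly fine
great_p = [10, -10]   # great under p, disastrous under q
great_q = [-10, 10]   # great under q, disastrous under p
actions = [fine, great_p, great_q]

def min_ev(a):
    return min(ev(a, d) for d in (p, q))

def worst_regret(a):
    return max(max(ev(b, d) for b in actions) - ev(a, d) for d in (p, q))

# both maximin and minimax regret select the uniformly fine action...
chosen_maximin = max(actions, key=min_ev)
chosen_regret = min(actions, key=worst_regret)
assert chosen_maximin == chosen_regret == fine

# ...but `fine` is not the EV-max action under p or under q:
assert ev(great_p, p) > ev(fine, p)
assert ev(great_q, q) > ev(fine, q)
```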
I think I agree in some very weak sense. For example, when I’m trying to diagnose a health issue, I do want to think about which priors and likelihoods to use — it’s not like these things are immediately given to me or something. In this sense, I’m at some point contemplating many possible distributions to use. But I guess we do have some meaningful disagreement left — I guess I take the most appealing decision rule to be more like pure aggregation than you do; I take imprecise probabilities with maximality to be a major step toward madness from doing something that stays closer to expected utility maximization.
I don’t really understand your point, sorry. “Big red arrows towards X” are only a problem for doing Y if (1) they tell me that doing Y is inconsistent with doing [the form of X that’s necessary to avoid leaving value on the table]. And these arrows aren’t action-guiding for me unless (2) they tell me which particular variant of X to do. I’ve argued that there is no sense in which either (1) or (2) is true. Further, I think there are various big green arrows towards Y, as sketched in the SEP article and Mogensen paper I linked in the OP, though I understand if these aren’t fully satisfying positive arguments. (I tentatively plan to write such positive arguments up elsewhere.)
I’m just not swayed by vibes-level “arrows” if there isn’t an argument that my approach is leaving value on the table by my lights, or that you have a particular approach that doesn’t do so.
Oops sorry, my claim had the implicit assumptions that (1) your representor includes all the convex combinations, and (2) you can use mixed strategies. ((2) is standard in decision theory, and I think (1) is a reasonable assumption — if I feel clueless as to how much I endorse distribution p vs distribution q, it seems weird for me to still be confident that I don’t endorse a mixture of the two.)
If those assumptions hold, I think you can show that the max-regret-minimizing action maximizes EV w.r.t. some distribution in your representor. I don’t have a proof on hand but would welcome counterexamples. In your example, you can check that either the uniformly fine action does best on a mixture distribution, or a mix of the other actions does best (lmk if spelling this out would be helpful).
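In the kind of counterexample described earlier (a uniformly fine action plus one “great here, disastrous elsewhere” action per distribution), the claim does check out once convex combinations are added to the representor: under the 50/50 mixture, the uniformly fine action is exactly the EV-max one. A toy check (numbers hypothetical, symmetric for simplicity):

```python
# Two states; the representor originally contains p = [1, 0] and
# q = [0, 1]; `mix` is their 50/50 convex combination.
def ev(action, dist):
    return sum(pr * u for pr, u in zip(dist, action))

mix = [0.5, 0.5]
fine, great_p, great_q = [1, 1], [10, -10], [-10, 10]

evs = {"fine": ev(fine, mix),
       "great_p": ev(great_p, mix),
       "great_q": ev(great_q, mix)}
# the minimax-regret choice (the uniformly fine action) maximizes EV
# w.r.t. the mixture: fine scores 1.0, the other two score 0.0
assert max(evs, key=evs.get) == "fine"
```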
Oh ok yea that’s a nice setup and I think I know how to prove that claim — the convex optimization argument I mentioned should give that. I still endorse the branch of my previous comment that comes after considering roughly that option though:
The branch that’s about sequential decision-making, you mean? I’m unconvinced by this too, see e.g. here — I’d appreciate more explicit arguments for this being “nonsense.”
To clarify, I think in this context I’ve only said that the claim “The minimax regret rule (sec 5.4.2 of Bradley (2012)) is equivalent to EV max w.r.t. the distribution in your representor that induces maximum regret” (and maybe the claim after it) was “false/nonsense” — in particular, because it doesn’t make sense to talk about a distribution that induces maximum regret (without reference to a particular action) — which I’m guessing you agree with.
I wanted to say that I endorse the following:
Neither of the two decision rules you mentioned is (in general) consistent with any EV max if we conceive of it as giving your preferences (not just picking out a best option), nor if we conceive of it as telling you what to do on each step of a sequential decision-making setup.
I think basically any setup is an example for either of these claims. Here’s a canonical counterexample for the version with preferences and the max_{actions} min_{probability distributions} EV (i.e., infrabayes) decision rule, i.e. with our preferences corresponding to the min_{probability distributions} EV ranking:
Let a and c be actions and let b be flipping a fair coin and then doing a or c depending on the outcome. It is easy to construct a case where the max-min rule strictly prefers b to a and also strictly prefers b to c, and indeed where this preference is strong enough that the rule still strictly prefers b to a small enough sweetening of a and also still prefers b to a small enough sweetening of c (in fact, a generic setup will have such a triple). Call these sweetenings a+ and c+ (think of these as a-but-you-also-get-one-cent or a-but-you-also-get-one-extra-moment-of-happiness or whatever; the important thing is that all utility functions under consideration should consider this one cent or one extra moment of happiness or whatever a positive). However, every EV max rule (that cares about the one cent) will strictly disprefer b to at least one of a+ or c+, because if that weren’t the case, the EV max rule would need to weakly prefer b over a coinflip between a+ and c+, but this is just saying that the EV max rule weakly prefers b to b+, which contradicts it caring about sweetening. So these min preferences are incompatible with maximizing any EV.
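A toy instantiation of this argument (all numbers hypothetical): two states, a and c symmetric opposites, b represented by its per-state EV as the fair coinflip between them, and a sweetening of 0.1.

```python
def ev(action, dist):
    return sum(pr * u for pr, u in zip(dist, action))

# Two states; representor {p, q}; utilities listed per state.
p, q = [1.0, 0.0], [0.0, 1.0]
a = [1, -1]
c = [-1, 1]
b = [0, 0]              # fair coinflip between a and c (state-wise EV)
eps = 0.1               # the sweetening (the one cent)
a_plus = [u + eps for u in a]
c_plus = [u + eps for u in c]

def min_ev(x):
    return min(ev(x, d) for d in (p, q))

# max-min strictly prefers b even to the sweetened options:
assert min_ev(b) > min_ev(a_plus) and min_ev(b) > min_ev(c_plus)

# ...but under ANY mixture w*p + (1-w)*q, b loses to a+ or to c+,
# so no EV max rule reproduces the max-min preferences:
for w in [i / 100 for i in range(101)]:
    d = [w, 1 - w]
    assert max(ev(a_plus, d), ev(c_plus, d)) > ev(b, d)
```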
There is a canonical way in which a counterexample in preference-land can be turned into a counterexample in sequential-decision-making-land: just make the “sequential” setup really just be a two-step game where you first randomly pick a pair of actions to give the agent a choice between, and then the agent makes some choice. The game forces the max min agent to “reveal its preferences” sufficiently for its policy to be revealed to be inconsistent with EV maxing. (This is easiest to see if the agent is forced to just make a binary choice. But it’s still true even if you avoid the strictly binary choice being forced upon the agent by saying that the agent still has access to (internal) randomization.)
Regarding the Thornley paper you link: I’ve said some stuff about it in my earlier comments; my best guess for what to do next would be to prove some theorem about behavior that doesn’t make explicit use of a completeness assumption, but also it seems likely that this would fail to relate sufficiently to our central disagreements to be worthwhile. I guess I’m generally feeling like I might bow out of this written conversation soon/now, sorry! But I’d be happy to talk more about this synchronously — if you’d like to schedule a meeting, feel free to message me on the LW messenger.
My claim is that your notion of “utter disaster” presumes that a consequentialist under deep uncertainty has some sense of what to do, such that they don’t consider ~everything permissible. This begs the question against severe imprecision. I don’t really see why we should expect our pretheoretic intuitions about the verdicts of a value system as weird as impartial longtermist consequentialism, under uncertainty as severe as ours, to be a guide to our epistemics.
I agree that intuitively it’s a very strange and disturbing verdict that ~everything is permissible! But that seems to be the fault of impartial longtermist consequentialism, not imprecise beliefs.
As an aspiring rational agent, I’m faced with lots of options. What do I do? Ideally I’d like to just be able to say which option is “best” and do that. If I have a complete ordering over the expected utilities of the options, then clearly the best option is the expected utility-maximizing one. If I don’t have such a complete ordering, things are messier. I start by ruling out dominated options (as Maximality does). The options in the remaining set are all “permissible” in the sense that I haven’t yet found a reason to rule them out.
I do of course need to choose an action eventually. But I have some decision-theoretic uncertainty. So, given the time to do so, I want to deliberate about which ways of narrowing down this set of options further seem most reasonable (i.e., satisfy principles of rational choice I find compelling).
(Basically I think EU maximization is a special case of “narrow down the permissible set as much as you can via principles of rational choice,[1] then just pick something from whatever remains.” It’s so straightforward in this case that we don’t even recognize we’re identifying a (singleton) “permissible set.”)
Now, maybe you’d just want to model this situation like: “For embedded agents, ‘deliberation’ is just an option like any other. Your revealed strict preference is to deliberate about rational choice.” I might be fine with this model.[2] But:
For the purposes of discussing how {the VOI of deliberation about rational choice} compares to {the value of going with our current “best guess” in some sense}, I find it conceptually helpful to think of “choosing to deliberate about rational choice” as qualitatively different from other choices.
The procedure I use to decide to deliberate about rational choice principles is not “I maximize EV w.r.t. some beliefs,” it’s “I see that my permissible set is not a singleton, I want more action-guidance, so I look for more action-guidance.”
“Achieve Pareto-efficiency” (as per the CCT) is one example of such a principle.
Though I think once you open the door to this embedded agency stuff, reasoning about rational choice in general becomes confusing even for people who like precise EV max.