Good question. I’m working on a post on what moral uncertainty actually is (which I’ll hopefully publish tomorrow), and your question has made me realise it’d be worth having a section on the matter of “risk vs (Knightian) uncertainty” in there.
Personally, I was effectively using “uncertainty” in a sense that incorporates both of the concepts you refer to, though I hadn’t been thinking explicitly about the matter. (And some googling suggests academic work on moral uncertainty almost never mentions the potential distinction. When the term “moral risk” is used, it just means the subset of moral uncertainty where one may be doing something bad, rather than being set against something like Knightian uncertainty.)
This distinction is something I feel somewhat confused about, and it seems to me that confusion/disagreement about the distinction is fairly common more widely, including among LW/EA-related communities. E.g., the Wikipedia article on Knightian uncertainty says: “The difference between predictable variation and unpredictable variation is one of the fundamental issues in the philosophy of probability, and different probability interpretations treat predictable and unpredictable variation differently. The debate about the distinction has a long history.”
But here are some of my current (up for debate) thoughts, which aren’t meant to be authoritative or convincing. (This turned out to be a long comment, because I used writing it as a chance to try to work through these ideas, and because my thinking on this still isn’t neatly crystallised, and because I’d be interested in people’s thoughts, as that may inform how I discuss this matter in that other post I’m working on.)
Terms
My new favourite source on how the terms “risk” and “uncertainty” should probably (in my view) be used is this, from Ozzie Gooen. Ozzie notes the distinction/definitions you mention being common in business and finance, but then writes:
Disagreeing with these definitions are common dictionaries and large parts of science and mathematics. In the Merriam-Webster dictionary, every definition of ‘risk’ is explicitly about possible negative events, not about general things with probability distributions. (https://www.merriam-webster.com/dictionary/risk)
There is even a science explicitly called “uncertainty quantification”, but none explicitly called “risk quantification”.
This is obviously something of a mess. Some business people get confused with mathematical quantifications of uncertainty, but other people would be confused by quantifications of socially positive “risks”.
Ozzie goes on to say:
Douglas Hubbard came up with his own definitions of uncertainty and risk, which are what inspired a very similar set of definitions on Wikipedia (the discussion page specifically mentions this link).
From Wikipedia:
Uncertainty: The lack of certainty. A state of having limited knowledge where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome.
Measurement of uncertainty: A set of possible states or outcomes where probabilities are assigned to each possible state or outcome — this also includes the application of a probability density function to continuous variables.
Risk: A state of uncertainty where some possible outcomes have an undesired effect or significant loss.
Measurement of risk: A set of measured uncertainties where some possible outcomes are losses, and the magnitudes of those losses — this also includes loss functions over continuous variables.
So according to these definitions, risk is essentially a strict subset of uncertainty.
(Douglas Hubbard is the author of How to Measure Anything, which I’d highly recommend for general concepts and ways of thinking useful across many areas. Great summary here.)
That’s how I’d also want to use those terms, to minimise confusion overall (though this usage will still cause some confusion among people from, e.g., business and finance).
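To make that “strict subset” relationship concrete, here is a minimal sketch in Python (the class names and example numbers are mine, purely for illustration) of the difference between a measurement of uncertainty and a measurement of risk in the sense defined above: the risk measurement is just an uncertainty measurement with loss magnitudes attached to some outcomes.

```python
from dataclasses import dataclass, field

@dataclass
class MeasuredUncertainty:
    """A set of possible outcomes, with a probability assigned to each."""
    probabilities: dict[str, float]  # outcome -> probability (should sum to ~1)

@dataclass
class MeasuredRisk(MeasuredUncertainty):
    """A measured uncertainty where some outcomes are losses of known magnitude."""
    losses: dict[str, float] = field(default_factory=dict)  # outcome -> size of loss

    def expected_loss(self) -> float:
        return sum(self.probabilities[o] * size for o, size in self.losses.items())

# Hypothetical example: a project with one clearly bad outcome.
project = MeasuredRisk(
    probabilities={"succeeds": 0.6, "breaks even": 0.3, "fails": 0.1},
    losses={"fails": 50_000.0},
)
print(project.expected_loss())  # 5000.0
```

Every measured risk here is also a measured uncertainty, but not the other way around, which is the “strict subset” point.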
Concepts
But that was just a matter of which words are probably least likely to lead to confusion. A separate point is whether I was talking about what you mean by “risk” or what you mean by “uncertainty” (i.e., “Knightian uncertainty”). I do think a distinction between these concepts “carves reality at the joints” to some extent—it points in the direction of something real and worth paying attention to—but I also (currently) think it’s too binary, and can be misleading. (Again, all of this comment is up for debate, and I’d be interested in people checking my thinking!)
Here’s one description of the distinction (from this article):
In the case of risk, the outcome is unknown, but the probability distribution governing that outcome is known. Uncertainty, on the other hand, is characterised by both an unknown outcome and an unknown probability distribution. For risk, these chances are taken to be objective, whereas for uncertainty, they are subjective. Consider betting with a friend by rolling a die. If one rolls at least a four, one wins 30 Euros (or Pounds, Dollars, Yen, Republic Dataries, Bitcoins, etc.). If one rolls lower, one loses. If the die is unbiased, one’s decision to accept the bet is taken with the knowledge that one has a 50 per cent chance of winning and losing. This situation is characterised by risk. However, if the die has an unknown bias, the situation is characterised by uncertainty. The latter applies to all situations in which one knows that there is a chance of winning and losing but has no information on the exact distribution of these chances.
This sort of example is common, and it seems to me that it’d make more sense to see not a difference in kind between types of estimate/probability/uncertainty, but a difference in the degree of our confidence in, or basis for, the estimate/probability. And this seems to be a fairly common view, at least among people using a Bayesian framework. E.g., Wikipedia states:
Bayesian approaches to probability treat it as a degree of belief and thus they do not draw a distinction between risk and a wider concept of uncertainty: they deny the existence of Knightian uncertainty. They would model uncertain probabilities with hierarchical models, i.e. where the uncertain probabilities are modelled as distributions whose parameters are themselves drawn from a higher-level distribution (hyperpriors)
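Before moving on, here is a rough sketch (in Python, with my own illustrative choice of prior and hyperprior) of what that kind of hierarchical treatment looks like for the die example above: the die of “unknown bias” still gets a win probability of about 0.5, just with a wide distribution around it rather than a point value.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20_000

# "Risk" case from the die bet above: die assumed fair, so P(roll >= 4) = 0.5.
p_win_fair = 3 / 6

# "Knightian" case: bias unknown. Model it hierarchically, as the quote describes:
# a hyperprior over how lopsided the die might be, then a Dirichlet distribution
# over the six face probabilities given that. (Both choices are just for illustration.)
concentration = rng.exponential(scale=1.0, size=n_samples) + 0.1  # hyperprior samples
face_probs = np.array([rng.dirichlet(np.full(6, c)) for c in concentration])
p_win_samples = face_probs[:, 3:].sum(axis=1)  # implied P(rolling a 4, 5, or 6)

print(f"assumed-fair die: P(win) = {p_win_fair:.2f}")
print(f"unknown-bias die: E[P(win)] = {p_win_samples.mean():.2f}, "
      f"spread (sd) = {p_win_samples.std():.2f}")
# Both treatments put ~0.5 on winning; the second just attaches a wide distribution
# around that number. A difference of degree, not of kind.
```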
And in the article on Knightian uncertainty, Wikipedia says “Taleb asserts that Knightian risk does not exist in the real world, and instead finds gradations of computable risk”. I haven’t read anything by Taleb and can’t confirm that that’s his view, but that view makes a lot of sense to me.
For example, in the typical examples, we can never really know with certainty that a coin/die/whatever is unbiased. So it can’t be the case that “the probability distribution governing that outcome is known”, except in the sense of being known probabilistically. And in that case it’s not a totally different situation to a case of an “unknown” distribution, where we may still have some scrap of evidence or a meaningful prior, and even if not, we can use some form of uninformative prior (e.g., a uniform prior; it seems to me that uninformative priors are another debated topic, and my knowledge there is quite limited).
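As a quick worked version of that uniform-prior case (a standard textbook calculation, included just to make the point concrete): if θ is the coin’s unknown bias and we give it a uniform prior, the predictive probability of heads is

$$P(\text{heads}) = \int_0^1 \theta \, p(\theta) \, \mathrm{d}\theta = \int_0^1 \theta \cdot 1 \, \mathrm{d}\theta = \frac{1}{2},$$

exactly the number the “known fair” coin gets, and after seeing h heads in n flips it becomes, by Laplace’s rule of succession,

$$P(\text{heads} \mid h \text{ heads in } n \text{ flips}) = \frac{h+1}{n+2}.$$

So the “unknown” coin differs from the “known” one mainly in how readily this number would move with evidence, not in whether a meaningful number exists at all.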
Something that seems relevant from here is a great quote from Tetlock, along the lines of (I can’t remember or find his exact phrasing): lots of people claim you can’t predict x, you can’t predict y, these things are simply too unforeseeable; so Tetlock decided to make a reference class for “Things people say you can’t predict”, and predictions in that reference class turned out to be fairly good. (I think this was from Superforecasting, but it might’ve been Expert Political Judgment. If anyone can remember/find the quote, please let me know.)
Usefulness of a distinction
But as I said, I do think the distinction still points in the direction of something important (I’d just want it to do so in terms of quantitative degrees, rather than qualitative kinds). Holden of GiveWell (at the time) had some relevant posts that seem quite interesting and generated a lot of discussion, e.g. this and this.
One reason I think it’s useful to consider our confidence in our estimates is the optimizer’s curse (also discussed here). If we’re choosing from a set of things based on which one we predict will be “best” (e.g., the most cost-effective of a set of charities), we should expect to be disappointed, and this issue is more pronounced the more “uncertain” our estimates are (not as in “probabilities closer to 0”, but as in “probabilities we have less basis for”). (I’m not sure how directly relevant the optimizer’s curse is for moral uncertainty, though. Maybe something more general like regression to the mean is more relevant.)
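For concreteness, here is a minimal simulation sketch (Python, all numbers invented) of that effect: every charity is equally good in reality, but if we pick whichever one has the highest noisy estimate, that estimate systematically overstates the truth, and by more when the estimates are shakier.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_charities = 10_000, 20
true_value = np.full(n_charities, 10.0)  # in truth, every option is equally good

def mean_overestimate(noise_sd: float) -> float:
    """Average (estimate of the chosen option minus its true value) when picking the max estimate."""
    estimates = true_value + rng.normal(0.0, noise_sd, size=(n_trials, n_charities))
    chosen = estimates.argmax(axis=1)
    chosen_estimates = estimates[np.arange(n_trials), chosen]
    return float((chosen_estimates - true_value[chosen]).mean())

for sd in (1.0, 3.0, 10.0):
    print(f"noise sd = {sd:>4}: chosen charity is overestimated by about {mean_overestimate(sd):.1f}")
# The winner's estimate exceeds its true value on average, and by more when the
# estimates have less basis behind them, which is why some downward adjustment
# (or an explicit Bayesian correction) is warranted.
```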
But I don’t think issues like that mean different types/levels of uncertainty need to be treated in fundamentally different ways. We can just do things like adjusting our initial estimates downwards based on our knowledge of the optimizer’s curse. Or, relatedly, we can use model combination and adjustment, which I’ve had a go at for the Devon example in this Guesstimate model.
Concepts I find very helpful here are credence resilience and (relatedly) a distinction between the balance of the evidence and the weight of the evidence (explained here). These (I think) allow us to discuss the sort of thing “risk vs Knightian uncertainty” is trying to get at, but in a less binary way, and in a way that more clearly highlights how we should respond and other implications (e.g., that value of information will typically be higher when our credences are less “resilient”, which I’ll discuss a bit in my upcoming post on relating value of information to moral uncertainty).
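Here is a small sketch of that in the same spirit (the Beta-distribution formalisation and all the numbers are just my own illustration, not something from the posts linked above): two credences of 50% backed by very different weights of evidence respond very differently to the same observation, which is why evidence-gathering is worth more for the less resilient one.

```python
# Two credences of 0.5 in "this coin lands heads", formalised (as one possible
# choice) as Beta(a, b) distributions with the same mean but different weight
# of evidence behind them.
def posterior_mean(a: float, b: float, heads: int, tails: int) -> float:
    """Posterior mean of a Beta(a, b) prior after observing the given coin flips."""
    return (a + heads) / (a + b + heads + tails)

evidence = {"heads": 8, "tails": 2}  # a made-up observation

for label, (a, b) in {"low resilience": (1, 1), "high resilience": (100, 100)}.items():
    prior_mean = a / (a + b)
    post_mean = posterior_mean(a, b, **evidence)
    print(f"{label:>15}: credence {prior_mean:.2f} -> {post_mean:.3f} "
          f"after seeing 8 heads and 2 tails")
# Same starting credence (0.50), very different movement (to ~0.75 vs ~0.51),
# so the same experiment carries much more value of information in the first case.
```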
Hope that wall of text (sorry!) helps clarify my current thinking on these issues. And as I said earlier, I’d be interested in people’s thoughts on what I’ve said here, to inform how I (hopefully far more concisely) discuss this matter in my later post.
To be specific regarding the model combination point (which is probably the way in which the risk vs Knightian uncertainty distinction is most likely to be relevant to moral uncertainty): I now very tentatively think it might make sense to see whatever “approach” we use to moral uncertainty (whether accounting for empirical uncertainty or not) as one “model”, and have some other model(s) as well. These other models could be an alternative approach to moral uncertainty, or a sort of “common sense” moral theory (of a form that can’t be readily represented in, e.g., the MEC model, or that Devon thinks shouldn’t be represented in that model because he’s uncertain about that model), or something like “Don’t do anything too irreversible because there might be some better moral theory out there that we haven’t heard of yet”.
Then we could combine the two (or more) models, with the option of weighting them differently if we have different levels of confidence in each one. (E.g., because we’re suspicious of the explicit MEC approach. Or on the other hand because we might think our basic, skipping-to-the-conclusion intuitions can’t be trusted much at all on matters like this, and that a more regimented approach that disaggregates/factors things out is more reliable). This is what I had a go at showing in this model.
For example, I think I’d personally find it quite plausible to take the results of (possibly normalised) MEC/MEC-E quite seriously, but to still think there’s a substantial chance of unknown unknowns, making me want to combine those results with something like a “Try to avoid extremely counter-common-sense actions or extremely irreversible consequences” model.
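As a very rough sketch of what that combination could look like (the scores, the 0-to-1 normalisation, and the weighting scheme are all placeholders I am inventing here, not anything from the Guesstimate model linked above):

```python
# Hypothetical choice-worthiness scores from an MEC-style calculation, and from a
# simple "avoid counter-common-sense or irreversible actions" model, both assumed
# to be normalised onto the same 0-to-1 scale (itself a non-trivial assumption).
mec_scores = {"action A": 0.9, "action B": 0.6, "action C": 0.4}
common_sense_scores = {"action A": 0.1, "action B": 0.7, "action C": 0.8}
# e.g. action A does very well under MEC but looks extremely counter-common-sense

weight_on_mec = 0.7  # how much we trust the explicit MEC model relative to the other

combined = {
    action: weight_on_mec * mec_scores[action]
            + (1 - weight_on_mec) * common_sense_scores[action]
    for action in mec_scores
}
print(combined)                         # {'action A': 0.66, 'action B': 0.63, 'action C': 0.52}
print(max(combined, key=combined.get))  # 'action A' here, but 'action B' once
                                        # weight_on_mec drops below about 0.67
```

The interesting design choice is really just the weight, which is where differing confidence in the explicit MEC approach versus one’s skipped-to-the-conclusion intuitions would show up.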
But I only thought of this when trying to think through how to respond to your comment (and thus re-reading things like the model combination post and the sequence vs cluster thinking post), and I haven’t seen something like it discussed in the academic work on moral uncertainty I’ve read, so all of the above is just putting an idea out there for discussion.
Thanks, lots to think about there, and it has been helpful. I think as I digest and nibble more it will provide greater understanding.
For example, I think I’d personally find it quite plausible to take the results of (possibly normalised) MEC/MEC-E quite seriously, but to still think there’s a substantial chance of unknown unknowns, making me want to combine those results with something like a “Try to avoid extremely counter-common-sense actions or extremely irreversible consequences” model.
That resonates well for me, particularly that element of a rule-based versus a maximizing/optimized approach. Not quite sure how or where that fits (I suspect this is a life-time effort to get close to fully working through), and opportunities to effectively subdivide areas (which then require all those “how do we classify...” aspects) might be good.
With regards to rules, I think there is also something of an uncertainty reducing role in that rules will increase predictability of external actions. This is not a well thought out idea for me but seems correct, at least in a number of possible situations.

Look forward to reading more.
With regards to rules, I think there is also something of an uncertainty reducing role in that rules will increase predictability of external actions. This is not a well thought out idea for me but seems correct, at least in a number of possible situations.
Are you talking in something like game-theoretic terms, such as how pre-committing to one policy can make it easier for others to plan with that in mind, for cooperation to be achieved, to avoid extortion, etc.?
If so, that seems plausible and interesting (I hadn’t been explicitly thinking about that).
But I’d also guess that that benefit could be captured by the typical moral uncertainty approaches (whether they explicitly account for empirical uncertainty or not), as long as some theories you have credence in are at least partly consequentialist. (I.e., you may not need to use model combination and adjustment to capture this benefit, though I see no reason why it’s incompatible with model combination and adjustment either.)
Specifically, I’m thinking that the theories you have credence in could give higher choice-worthiness scores to actions that stick closer to what’s typical of you or of some broader group (e.g., all humans), or that stick closer to a policy you selected with a prior action, because of the benefits to cooperation/planning/etc. (In the “accounting for empirical uncertainty” version, tweak that to the theories valuing those benefits as outcomes, and to you predicting that those actions will likely lead to those outcomes.)
Or you could set things up so there’s another “action” to choose from which represents “Select policy X, and stick to it for the next few years except in Y subset of cases, in which situations you can run MEC again to see what to do”. Then it’d be something like evaluating whether to pre-commit to a form of rule utilitarianism in most cases.
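To show the shape of that calculation (with invented credences and choice-worthiness numbers, purely as a toy): the policy-commitment option just enters MEC as one more action, whose score under each theory can already reflect the cooperation/planning benefits of predictability.

```python
# Credences in two placeholder moral theories, and each theory's choice-worthiness
# for each action, including a "commit to policy X for the next few years" action
# that gets credit for the predictability/cooperation benefits discussed above.
credences = {"theory 1": 0.6, "theory 2": 0.4}
choice_worthiness = {
    "decide case by case": {"theory 1": 0.7, "theory 2": 0.3},
    "commit to policy X":  {"theory 1": 0.6, "theory 2": 0.8},
}

def mec_score(action: str) -> float:
    """Expected choice-worthiness of an action across theories (the basic MEC sum)."""
    return sum(credences[t] * cw for t, cw in choice_worthiness[action].items())

for action in choice_worthiness:
    print(f"{action}: {mec_score(action):.2f}")
# decide case by case: 0.6*0.7 + 0.4*0.3 = 0.54
# commit to policy X:  0.6*0.6 + 0.4*0.8 = 0.68, so MEC here favours pre-committing
```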
But again, this is just me spit-balling—I haven’t seen this sort of thing discussed in the academic literature.
Not really a game-theoretic concept for me. The thought largely stems from an article I read over 30 years ago, “The Origins of Predictable Behavior”. As I recall, it is a largely Bayesian argument for how humans started to evolve rules. One aspect (assuming I am recalling correctly) was that rules took the entire individual calculation out of the picture: we just follow the rule and don’t make any effort to optimize under certain conditions.
I don’t think this is really an alternative approach—perhaps a complementary aspect or partial element in the bigger picture.
I’ve tried to add a bit more but have deleted and retried now about 10 times so I think I will stop here and just think, and reread, a good bit more.
It sounds like what you’re discussing is something like the fact that a “decision procedure” that maximises a “criterion of rightness” over time (rather than just in one instance) may not be “do the thing that maximises this criterion of rightness”. I got these terms from here, and then was reminded of them again by this comment (which is buried in a large thread I found hard to follow all of, but the comment has separate value for my present purposes).
In which case, again I agree. Personally I have decent credence in act utilitarianism being the criterion of rightness, but almost 0 credence that that’s the rule one should consciously follow when faced with any given decision situation (e.g., should I take public transport or an Uber? Well, increased demand for Ubers should increase prices and thus supply, increasing emissions, but on the other hand the drivers usually have relatively low income so the marginal utility of my money for them...). Act utilitarianism itself would say that the act “Consciously calculate the utility likely to come from this act, considering all consequences” has terrible expected utility in almost all situations.
So instead, I’d only consciously follow a massively simplified version of act utilitarianism for some big decisions and when initially setting (and maybe occasionally checking back in on) certain “policies” for myself that I’ll follow regularly (e.g., “use public transport regularly and Ubers just when it’s super useful, and don’t get a car, for climate change reasons”). Then the rest of the time, I follow those policies or other heuristics, which may be to embody certain “virtues” (e.g., be a nice person), which doesn’t at all imply I actually believe in virtue ethics as a criterion for rightness.
(I think this is similar to two-level utilitarianism, but I’m not sure if that theory makes as explicit a distinction between criteria of rightness and decision procedures.)
But again, I think that’s all separate from moral uncertainty (as you say “I don’t think this is really an alternative approach—perhaps a complementary aspect or partial element in the bigger picture”). I think that’s more like an empirical question of “Given we know x is the ‘right’ moral theory [very hypothetically!], how should we behave to do the best thing by its lights?” And then the moral theory could indicate high choice-worthiness for the “action” of selecting some particular broad policy to follow going forward, and then later indicate high choice-worthiness for “actions” that pretty much just follow that broad policy (because, among other reasons, that saves you a lot of calculation time which you can then use for other things).