With regard to rules, I think there is also something of an uncertainty-reducing role, in that rules increase the predictability of external actions. This isn’t a well-thought-out idea of mine, but it seems correct, at least in a number of possible situations.
Are you talking in something like game-theoretic terms, i.e., how pre-committing to one policy can make it easier for others to plan with that in mind, for cooperation to be achieved, for extortion to be avoided, etc.?
If so, that seems plausible and interesting (I hadn’t been explicitly thinking about that).
But I’d also guess that that benefit could be captured by the typical moral uncertainty approaches (whether they explicitly account for empirical uncertainty or not), as long as some theories you have credence in are at least partly consequentialist. (I.e., you may not need to use model combination and adjustment to capture this benefit, though I see no reason why it’s incompatible with that approach either.)
Specifically, I’m thinking that the theories you have credence in could give higher choice-worthiness scores to actions that stick closer to what’s typical of you or of some broader group (e.g., all humans), or that stick closer to a policy you selected with a prior action, because of the benefits to cooperation/planning/etc. (In the “accounting for empirical uncertainty” version, tweak that to the theories valuing those benefits as outcomes, and to you predicting that those actions will likely lead to those outcomes.)
Or you could set things up so there’s another “action” to choose from which represents “Select policy X and stick to it for the next few years, except in Y subset of cases, in which you can run MEC (maximise expected choice-worthiness) again to see what to do”. Then it’d be something like evaluating whether to pre-commit to a form of rule utilitarianism in most cases.
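To make that concrete, here’s a minimal sketch in Python of how MEC could score that kind of pre-commitment “action” alongside ordinary case-by-case optimising. All the theory names, credences, and choice-worthiness numbers are invented purely for illustration; the only thing taken from the discussion above is the shape of the credence-weighted calculation:

```python
# A minimal sketch of "maximise expected choice-worthiness" (MEC) with a
# pre-commitment meta-action on the menu. All names and numbers below are
# hypothetical, chosen only to illustrate the shape of the calculation.

# Credence in each moral theory (summing to 1).
credences = {
    "act_utilitarianism": 0.6,
    "common_sense_deontology": 0.4,
}

# Choice-worthiness each theory assigns to each action, assuming the
# scores have already been made intertheoretically comparable.
choice_worthiness = {
    "act_utilitarianism": {
        "optimise_case_by_case": 7.0,  # slightly better by this theory's lights
        "precommit_to_policy_X": 6.5,  # still scores well: coordination, saved deliberation
    },
    "common_sense_deontology": {
        "optimise_case_by_case": 1.0,
        "precommit_to_policy_X": 6.0,  # rule-following scores much better here
    },
}

def expected_choice_worthiness(action: str) -> float:
    """Credence-weighted average of the theories' scores for one action."""
    return sum(c * choice_worthiness[theory][action]
               for theory, c in credences.items())

for action in ("optimise_case_by_case", "precommit_to_policy_X"):
    print(f"{action}: {expected_choice_worthiness(action):.2f}")
# optimise_case_by_case: 0.6*7.0 + 0.4*1.0 = 4.60
# precommit_to_policy_X: 0.6*6.5 + 0.4*6.0 = 6.30, so MEC favours pre-committing
```

Note that pre-committing comes out on top here even though the theory with the most credence mildly prefers case-by-case optimising; that’s the sense in which the cooperation/predictability benefit could get “captured” by the standard approach.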
But again, this is just me spitballing; I haven’t seen this sort of thing discussed in the academic literature.
Not really a game-theoretic concept for me. The thought largely stems from an article I read over 30 years ago, “The Origins of Predictable Behavior”. As I recall, it makes a largely Bayesian argument for how humans evolved to follow rules. One aspect (assuming I’m recalling correctly) was that a rule takes the entire individual calculation out of the picture: we just follow the rule and don’t make any effort to optimize under certain conditions.
I don’t think this is really an alternative approach—perhaps a complementary aspect or partial element in the bigger picture.
I’ve tried to add a bit more, but I’ve deleted and retried about 10 times now, so I think I’ll stop here and just think, and reread, a good bit more.
It sounds like what you’re discussing is something like the fact that a “decision procedure” that maximises a “criterion of rightness” over time (rather than just in one instance) may not be “do the thing that maximises this criterion of rightness”. I got these terms from here, and then was reminded of them again by this comment (which is buried in a large thread that I found hard to follow in full, but the comment has separate value for my present purposes).
In which case, again I agree. Personally I have decent credence in act utilitarianism being the criterion of rightness, but almost 0 credence that that’s the rule one should consciously follow when faced with any given decision situation (e.g., should I take public transport or an Uber? Well, increased demand for Ubers should increase prices and thus supply, increasing emissions, but on the other hand the drivers usually have relatively low income so the marginal utility of my money for them...). Act utilitarianism itself would say that the act “Consciously calculate the utility likely to come from this act, considering all consequences” has terrible expected utility in almost all situations.
So instead, I’d only consciously follow a massively simplified version of act utilitarianism for some big decisions and when initially setting (and maybe occasionally checking back in on) certain “policies” for myself that I’ll follow regularly (e.g., “use public transport regularly and Ubers just when it’s super useful, and don’t get a car, for climate change reasons”). Then the rest of the time, I follow those policies or other heuristics, which may be to embody certain “virtues” (e.g., be a nice person), which doesn’t at all imply I actually believe in virtue ethics as a criterion for rightness.
(I think this is similar to two-level utilitarianism, but I’m not sure if that theory makes as explicit a distinction between criteria of rightness and decision procedures.)
But again, I think that’s all separate from moral uncertainty (as you say, “I don’t think this is really an alternative approach—perhaps a complementary aspect or partial element in the bigger picture”). I think that’s more like an empirical question of “Given we know x is the ‘right’ moral theory [very hypothetically!], how should we behave to do the best thing by its lights?” And then the moral theory could indicate high choice-worthiness for the “action” of selecting some particular broad policy to follow going forward, and then later indicate high choice-worthiness for “actions” that pretty much just follow that broad policy (because, among other reasons, that saves you a lot of calculation time which you can then use for other things).
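As a toy illustration of that last point (same hypothetical-numbers caveat as the earlier sketch), here’s why even a purely act-utilitarian criterion of rightness can favour the “mostly follow the policy” decision procedure, once the cost of conscious calculation is itself counted among the consequences:

```python
# Toy numbers, purely illustrative: average utility per everyday decision.
POLICY_VALUE = 4.0       # following the pre-selected broad policy
OPTIMISED_VALUE = 4.2    # freshly optimising each decision from scratch
DELIBERATION_COST = 0.5  # utility lost per decision spent consciously calculating

def total_value(n_decisions: int, follow_policy: bool) -> float:
    """Expected utility over n_decisions under each decision procedure."""
    if follow_policy:
        return n_decisions * POLICY_VALUE
    return n_decisions * (OPTIMISED_VALUE - DELIBERATION_COST)

print(total_value(1000, follow_policy=True))   # 4000.0
print(total_value(1000, follow_policy=False))  # 3700.0
# Even by act-utilitarian lights, the policy-following decision procedure
# wins here, because the calculation time is itself a consequence.
```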