Not really a game-theoretic concept for me. The thought largely stems from an article I read over 30 years ago, “The Origins of Predictable Behavior”. As I recall, it makes a largely Bayesian argument for how humans evolved rule-following. One aspect (assuming I am recalling correctly) was that rules take the entire individual calculation out of the picture: we just follow the rule and don’t make any effort to optimize under certain conditions.
I don’t think this is really an alternative approach—perhaps a complementary aspect or partial element in the bigger picture.
I’ve tried to add a bit more but have deleted and retried now about 10 times so I think I will stop here and just think, and reread, a good bit more.
It sounds like what you’re discussing is something like the fact that a “decision procedure” that maximises a “criterion of rightness” over time (rather than just in one instance) may not be “do the thing that maximises this criterion of rightness”. I got these terms from here, and was reminded of them again by this comment (which is buried in a large thread that I found hard to follow in full, but the comment has separate value for my present purposes).
In which case, again I agree. Personally I have decent credence in act utilitarianism being the criterion of rightness, but almost 0 credence that that’s the rule one should consciously follow when faced with any given decision situation (e.g., should I take public transport or an Uber? Well, increased demand for Ubers should increase prices and thus supply, increasing emissions, but on the other hand the drivers usually have relatively low income so the marginal utility of my money for them...). Act utilitarianism itself would say that the act “Consciously calculate the utility likely to come from this act, considering all consequences” has terrible expected utility in almost all situations.
So instead, I’d only consciously follow a massively simplified version of act utilitarianism for some big decisions and when initially setting (and maybe occasionally checking back in on) certain “policies” for myself that I’ll follow regularly (e.g., “use public transport regularly and Ubers just when it’s super useful, and don’t get a car, for climate change reasons”). Then the rest of the time, I follow those policies or other heuristics, which may be to embody certain “virtues” (e.g., be a nice person), which doesn’t at all imply I actually believe in virtue ethics as a criterion of rightness.
(I think this is similar to two-level utilitarianism, but I’m not sure if that theory makes as explicit a distinction between criteria of rightness and decision procedures.)
But again, I think that’s all separate from moral uncertainty (as you say, “I don’t think this is really an alternative approach—perhaps a complementary aspect or partial element in the bigger picture”). I think that’s more like an empirical question of “Given we know x is the ‘right’ moral theory [very hypothetically!], how should we behave to do the best thing by its lights?” And then the moral theory could indicate high choice-worthiness for the “action” of selecting some particular broad policy to follow going forward, and then later indicate high choice-worthiness for “actions” that pretty much just follow that broad policy (because, among other reasons, that saves you a lot of calculation time which you can then use for other things).