> The idea is, normally just do straightforwardly good things. Be cooperative, friendly, and considerate. Embrace the standard virtues. Don’t stress about the global impacts or second-order altruistic effects of minor decisions. But also identify the very small fraction of your decisions which are likely to have the largest effects and put a lot of creative energy into doing the best you can.
I agree with this, but would add that IMO, after you work out the consequentialist analysis of the small set of decisions that are worth intensive thought/effort/research, it is quite worthwhile to additionally work out something like a folk ethical account of why your result is correct, or of how the action you’re endorsing coheres with deep virtues/deontology/tropes/etc.
There are two big upsides to this process:
1. As you work this out, you get some extra checks on your reasoning—maybe folk ethics sees something you’re missing here; and
2. At least as importantly: a good folk ethical account will let individuals and groups cohere around the proposed action, in a simple, conscious, wanting-the-good-thing-together way, without needing to dissociate from what they’re doing (whereas accounts like “it’s worth dishonesty in this one particular case” will be harder to act on wholeheartedly, even when basically correct). And wholehearted action works a lot better.
IMO, this is similar to: in math, we use heuristics and intuitions and informal reasoning a lot, to guess how to do things—and we use detailed, not-condensed-by-heuristics algebra or mathematical proof steps sometimes also, to work out how a thing goes that we don’t yet find intuitive or obvious. But after writing a math proof the sloggy way, it’s good to go back over it, look for “why it worked,” “what was the true essence of the proof, that made it tick,” and see if there is now a way to “see it at a glance,” to locate ways of seeing that will make future such situations more obvious, and that can live in one’s system 1 and aesthetics as well as in one’s sloggy explicit reasoning.
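As a toy illustration (my example, not the OP’s): take the formula for the sum 1 + 2 + … + n. The sloggy route is induction; the condensed, see-it-at-a-glance route is Gauss’s trick of pairing the sum with its own reversal:

```latex
% Sloggy route: induction.
%   Base case: 1 = 1(1+1)/2.
%   Step: if 1 + ... + n = n(n+1)/2, then adding (n+1) gives
%         n(n+1)/2 + (n+1) = (n+1)(n+2)/2.
%
% At-a-glance route: add the sum to its own reversal.
\[
\begin{aligned}
   S &= 1 + 2 + \dots + n \\
   S &= n + (n-1) + \dots + 1 \\
  2S &= \underbrace{(n+1) + (n+1) + \dots + (n+1)}_{n \text{ copies}} = n(n+1),
\end{aligned}
\qquad \text{so } S = \frac{n(n+1)}{2}.
\]
```

Once the pairing view lives in your system 1, a whole family of “reindex and add” situations becomes obvious at a glance, without re-running the induction slog each time.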
Or, again, in coding: usually we can use standard data structures and patterns. Sometimes we have to hand-invent something new. But after coming up with the something new, it’s often good to condense it into something readily parsable, rememberable, and reusable, instead of leaving it as hand-rolled spaghetti code.
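To sketch what I mean (a minimal Python example of my own; the deduplication task and all names are illustrative, nothing from the OP): the same logic first hand-invented inline, then condensed into a small named helper:

```python
from typing import Callable, Hashable, Iterable, Iterator, TypeVar

T = TypeVar("T")

# Before: the idea hand-invented inline, entangled with one caller's details.
def process_events(events):
    seen = set()
    results = []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            results.append(event)
    return results

# After: the same idea condensed into a named, reusable abstraction.
def unique_by(items: Iterable[T], key: Callable[[T], Hashable]) -> Iterator[T]:
    """Yield items in order, keeping only the first occurrence of each key."""
    seen: set[Hashable] = set()
    for item in items:
        k = key(item)
        if k not in seen:
            seen.add(k)
            yield item

# At the call site the intent is now visible at a glance:
# unique_events = list(unique_by(events, key=lambda event: event["id"]))
```

The condensed version is the one future readers (including future-me) can parse, remember, and reuse without re-deriving it.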
Or, in physics and many other domains: new results are sometimes counterintuitive, but it is advisable to then work out intuitions whereby reality may be more intuitive in the future.
I don’t have my concepts well worked out here yet, which is why I’m being so long-winded and full of analogies. But I’m pretty sure that folk ethics, where we have it worked out, has a bunch of advantages over consequentialist reasoning that’re kind of like those above.
- As the OP notes, folk ethics can be applied to hundreds of decisions per day, without much thought per each;
- As the OP notes, folk ethics have been tested across huge numbers of past actions by huge numbers of people. New attempts at folk ethical reasoning can’t have this advantage fully. But I think when things are formulated simply enough, or enough in the language of folk ethics, we can back-apply them to a lot of known history and personally experienced anecdotes and so on (since they are quick to apply, as in the above bullet point), and can get at least some of the “we still like this heuristic after considering it in a lot of different contexts with known outcomes” advantage.
- As the OP implies, folk ethics is more robust to a lot of the normal human bias temptations (“x must be right, because I’d find it more convenient right this minute”) compared to case-by-case reasoning;
- It is easier for us humans to work hard on something, in a stable fashion, when we can see in our hearts that it is good, and can see how it relates to everything else we care about. Folk ethics helps with this. Maybe folk ethics, and notions of virtue and so on, kind of are takes on how a given thing can fit together with all the little decisions and all the competing pulls as to what’s good? E.g. the OP lists as examples of commonsense goods “patience, respect, humility, moderation, kindness, honesty”—and all of these are pretty usable guides to how to be while I care about something, and to how to relate that caring to my other cares and goals.
- I suspect there’s something particularly good here with groups. We humans often want to be part of groups that can work toward a good goal across a long period of time, while maintaining integrity, and this is often hard because groups tend to degenerate with time into serving individuals’ local power, becoming moral fads, or other things that aren’t as good as the intended purpose. Ethics, held in common by the group’s common sense, is a lot of how this is ever avoided, I think; and this is more feasible if the group is trying to serve a thing whose folk ethics (or “commonsense good”) has been worked out, vs something that hasn’t.
For a concrete example:
AI safety obviously matters. The folk ethics of “don’t let everyone get killed if you can help it” are solid, so that part’s fine. But in terms of tactics: I really think we need to work out a “commonsense good” or “folk ethics” type account of things like:
- Is it okay to try to get lots of power, by being first to AI and trying to make use of that power to prevent worse AI outcomes? (My take: maybe somehow, but I haven’t seen the folk ethics worked out, and a good working out would give a lot of checks here, I think.)
- Is it okay to try to suppress risky research, e.g. via frowning at people and telling them that only bad people do AI research, so as to try to delay tech that might kill everyone? (My take: probably, but a good folk ethics would bring structure and intuitions here. It would work out how this differs from other kinds of “discouraging people from talking and figuring things out,” and it would supply perceivable virtues for noticing those differences, which would help people track them at the group-commonsense level, in ways that don’t erode the group’s general belief in the goodness of sharing information and doing things.)