Supererogatory morality never made sense to me before. Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn’t, in which case you should instead do the optimally moral thing. Surely you are morally blameworthy for explicitly choosing not to do good, regardless. You cannot simply buy a video game instead of mosquito nets because the latter is “optional”, right?
I read about slack recently. I nodded and made affirmative noises in my head, excited to have learned a new concept that would surely be useful in the pursuit of rationality. Obviously we cannot be at 100% at all times, for all these good reasons and in all these good cases! I then clicked off and found another cool concept on LessWrong.
I then randomly stumbled upon an article that offhandedly made a supererogatory moral claim. Something clicked in my brain and I thought “That’s just slack applied to morality, isn’t it?”. Enthralled by the insight, I decided this was as good an opportunity as any to make my first Shortform. I had failed to think deeply enough about slack to actually integrate it into my beliefs. This was something to work on in the future to up my rationalist game, but I also got to pat myself on the back for realizing it.
Isn’t my acceptance of slack still in direct conflict with my current non-acceptance of supererogatory morality? And wasn’t I just about to conclude without actually reconciling the two positions?
Oh. Looks like I still have some actual work ahead of me, and some more learning to do.
I suspect that it’s a combination of a lot of things. Slack, yes. Also Goodhart’s law, in that optimizing directly for any particular expression of morality is liable to collapse it.
There are also second-order and higher-order effects from such moral principles: people who truly believe that everyone must always do the single most moral thing are likely to fail to convince others to live the same way, thereby reducing the total amount of good that could be done. They may also disagree on what the single most moral thing is, and suffer from factionalization and other serious breakdowns of coordination that would be less likely among people who are less dogmatic about moral necessity.
It’s a difficult problem, and certainly not one that we are going to solve any time soon.
Second-order effects, indeed.
Instead of being (the only) moral agent, imagine yourself in the role of a coordinator of moral agents. You specify their algorithm; they execute it. Your goal is to create the maximum good, but you must do it indirectly, through the agents.
Your constraints on maximizing good are the following: Doing good requires spending some resources; sometimes a trivial amount, sometimes a significant amount. So you aim for a balance between spending resources to do good now, and saving resources to keep your agents alive and strong for a long time, so they can do more good over their entire lifetimes. Furthermore, the agents do, in a sense, volunteer for the role, and stricter rules statistically make them less likely to volunteer. So you again aim for a balance between more good done per agent, and more agents doing good.
The result is a heuristic where some rules are mandatory to follow and other rules are optional. The optional rules do not slow down your recruitment of moral agents, but those who do not mind strict rules are given an option to do more good.
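To make the trade-off concrete, here is a toy model of that balance, a minimal sketch in which every functional form and constant is invented for illustration (none of it comes from the comment above): total good is (number of volunteering agents) × (good done per agent), where stricter mandatory rules raise the good done per agent but shrink the pool of volunteers.

```python
# Toy model of the mandatory-vs-optional rules trade-off.
# All functional forms and constants are made up for illustration.

def volunteers(strictness: float) -> float:
    """Agents willing to volunteer; drops off as the mandatory rules get stricter."""
    return 1000.0 * (1.0 - strictness) ** 2

def good_per_agent(strictness: float) -> float:
    """Good done by each agent; rises with stricter rules, with diminishing returns."""
    return strictness ** 0.5

def total_good(strictness: float) -> float:
    return volunteers(strictness) * good_per_agent(strictness)

# Search strictness levels from 0 (no mandatory rules) to 1 (maximally demanding).
best = max((s / 100 for s in range(101)), key=total_good)
print(f"best mandatory strictness ~ {best:.2f}, total good ~ {total_good(best):.1f}")
```

Under these made-up curves the optimum lands well below maximal strictness (around 0.2 here): keeping the mandatory rules modest and leaving the rest optional recruits more agents overall, which is the shape of the argument for supererogatory rules.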
It’s useful, but likely not valuable-in-itself, for people to strive to be primarily morality optimizers. Thus the optimally moral thing could be to care about the optimally moral thing substantially less than is sustainably feasible.
“Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn’t, in which case you should instead do the optimally moral thing.”

That’s all downstream of an implicit definition of “what I am obliged to do” as “the optimally moral thing”. If what you are obliged to do is less demanding, then there is space for the supererogatory.
If I am not obliged to do something, then why ought I do it, exactly? If it’s morally optimal, then how could I justify not doing it?
Many systems of morality are built more like “do no harm” than “do the best possible good at all times”.
That is, in some circumstances you are morally obliged to choose an action from a particular set, but the system does not prescribe which action from that set.
Such a system doesn’t prescribe which action from that set, but in order for it to contain supererogatory actions, it has to say that some are more “morally virtuous” than others, even in that narrowed set. These are not prescriptive moral claims, though. Even if you follow this moral system, a statement “X is more morally virtuous but not prescribed” coming from it is not relevant to you. The system might as well say “X is more fribble”. You won’t care either way, unless the moral system also prescribes X, in which case X isn’t supererogatory.
There are things that are good to do, but not obligatory.