What does that have to do with the situation at hand? Morality is an abstract division of actions into right and wrong, not some set of laws laid down by philosophers on the rest of the population. If you’re trying to work out what you mean by “morality” and use some criteria (such as something including happiness and fulfillment of populations which adopt that definition) to choose from a bunch of alternatives, then probably those criteria themselves are the most accurate definition of “morality” you could hope to find. I might add, in [almost] exactly the same way that a program which writes and then executes a program to add two numbers is, in fact, itself a program that adds two numbers.
You can write out your final definition in legalese later, if the situation calls for it.
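The program analogy can be made concrete with a small sketch (the function name is mine, purely for illustration): a program that writes the source of an adder and then executes it is, viewed from outside, just a program that adds two numbers.

```python
# A program that writes, then executes, a program to add two numbers.
# Observed from outside, it is itself a program that adds two numbers.

def add_via_generated_program(a, b):
    # Write the source code of a tiny adder program...
    source = f"result = {a} + {b}"
    # ...then execute that generated program in a fresh namespace.
    namespace = {}
    exec(source, namespace)
    return namespace["result"]

print(add_via_generated_program(2, 3))  # prints 5
```

The indirection through generated source changes nothing about the input-output behaviour, which is the point of the analogy: the criteria used to choose a definition of "morality" just are the operative definition, however the choice is carried out.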
> What does that have to do with the situation at hand? Morality is an abstract division of actions into right and wrong, not some set of laws laid down by philosophers on the rest of the population.
Morality comes with an implicit rule; when it says that “this action is the right action to take in this situation”, then the implicit rule is “if you find yourself in this situation, take this action”. There is usually no Morality Policeman ready to administer punishment if the rule is not followed, and the choice to follow the rule or not remains; but the rule is there.
> If you’re trying to work out what you mean by “morality” and use some criteria (such as something including happiness and fulfillment of populations which adopt that definition) to choose from a bunch of alternatives, then probably those criteria themselves are the most accurate definition of “morality” you could hope to find.
The difficulty is that I know the algorithm I am following is very likely not to fulfill the criteria in the best possible way; merely (more or less) as well as they have been fulfilled in the past. If I simply list the criteria, then I falsely imply that the chosen system of morality is the best fit for those criteria; and I am trying to avoid that implication.